Docs: Backport fixes for broken links (#10439)

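The 1.4.x versioned docs carried relative links that no longer resolve on
the published site: `metrics-reporting` (missing the `.md` suffix) and the
directory-style paths `../../api#table-metadata` and
`../spark-queries#snapshots`. This backports the fix so each page links to
its sibling `.md` file, which the docs build resolves to the correct URL.
The pattern, as applied to the metrics-reporter-impl row:

    before: See the [Metrics reporting](metrics-reporting) section
    after:  See the [Metrics reporting](metrics-reporting.md) section

The affected tables were re-padded and a few row descriptions lightly
copyedited in the same pass, which is why every row of each table appears
in the hunks.
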
diff --git a/1.4.0/docs/configuration.md b/1.4.0/docs/configuration.md
index 91ccffd..1b40129 100644
--- a/1.4.0/docs/configuration.md
+++ b/1.4.0/docs/configuration.md
@@ -124,16 +124,16 @@
 
 Iceberg catalogs support using catalog properties to configure catalog behaviors. Here is a list of commonly used catalog properties:
 
-| Property                          | Default            | Description                                            |
-| --------------------------------- | ------------------ | ------------------------------------------------------ |
-| catalog-impl                      | null               | a custom `Catalog` implementation to use by an engine  |
-| io-impl                           | null               | a custom `FileIO` implementation to use in a catalog   |
-| warehouse                         | null               | the root path of the data warehouse                    |
-| uri                               | null               | a URI string, such as Hive metastore URI               |
-| clients                           | 2                  | client pool size                                       |
-| cache-enabled                     | true               | Whether to cache catalog entries |
-| cache.expiration-interval-ms      | 30000              | How long catalog entries are locally cached, in milliseconds; 0 disables caching, negative values disable expiration |
-| metrics-reporter-impl | org.apache.iceberg.metrics.LoggingMetricsReporter | Custom `MetricsReporter` implementation to use in a catalog. See the [Metrics reporting](metrics-reporting) section for additional details |
+| Property                          | Default            | Description                                                                                                                                   |
+| --------------------------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------- |
+| catalog-impl                      | null               | A custom `Catalog` implementation for an engine to use                                                                                        |
+| io-impl                           | null               | A custom `FileIO` implementation to use in a catalog                                                                                          |
+| warehouse                         | null               | The root path of the data warehouse                                                                                                           |
+| uri                               | null               | A URI string, such as a Hive metastore URI                                                                                                    |
+| clients                           | 2                  | Client pool size                                                                                                                              |
+| cache-enabled                     | true               | Whether to cache catalog entries                                                                                                              |
+| cache.expiration-interval-ms      | 30000              | How long catalog entries are locally cached, in milliseconds; 0 disables caching, negative values disable expiration                          |
+| metrics-reporter-impl | org.apache.iceberg.metrics.LoggingMetricsReporter | Custom `MetricsReporter` implementation to use in a catalog. See the [Metrics reporting](metrics-reporting.md) section for additional details |
 
 `HadoopCatalog` and `HiveCatalog` can access the properties in their constructors.
 Any other custom catalog can access the properties by implementing `Catalog.initialize(catalogName, catalogProperties)`.
diff --git a/1.4.0/docs/spark-configuration.md b/1.4.0/docs/spark-configuration.md
index ef392f4..e8e9182 100644
--- a/1.4.0/docs/spark-configuration.md
+++ b/1.4.0/docs/spark-configuration.md
@@ -178,19 +178,19 @@
     .insertInto("catalog.db.table")
 ```
 
-| Spark option           | Default                    | Description                                                  |
-| ---------------------- | -------------------------- | ------------------------------------------------------------ |
-| write-format           | Table write.format.default | File format to use for this write operation; parquet, avro, or orc |
-| target-file-size-bytes | As per table property      | Overrides this table's write.target-file-size-bytes          |
-| check-nullability      | true                       | Sets the nullable check on fields                            |
-| snapshot-property._custom-key_    | null            | Adds an entry with custom-key and corresponding value in the snapshot summary (the `snapshot-property.` prefix is only required for DSv2)  |
-| fanout-enabled       | false        | Overrides this table's write.spark.fanout.enabled  |
-| check-ordering       | true        | Checks if input schema and table schema are same  |
-| isolation-level | null | Desired isolation level for Dataframe overwrite operations.  `null` => no checks (for idempotent writes), `serializable` => check for concurrent inserts or deletes in destination partitions, `snapshot` => checks for concurrent deletes in destination partitions. |
-| validate-from-snapshot-id | null | If isolation level is set, id of base snapshot from which to check concurrent write conflicts into a table. Should be the snapshot before any reads from the table. Can be obtained via [Table API](../../api#table-metadata) or [Snapshots table](../spark-queries#snapshots). If null, the table's oldest known snapshot is used. |
-| compression-codec      | Table write.(fileformat).compression-codec | Overrides this table's compression codec for this write      |
-| compression-level      | Table write.(fileformat).compression-level | Overrides this table's compression level for Parquet and Avro tables for this write |
-| compression-strategy   | Table write.orc.compression-strategy       | Overrides this table's compression strategy for ORC tables for this write |
+| Spark option           | Default                    | Description                                                                                                                                                                                                                                                                                                                      |
+| ---------------------- | -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| write-format           | Table write.format.default | File format to use for this write operation; parquet, avro, or orc                                                                                                                                                                                                                                                               |
+| target-file-size-bytes | As per table property      | Overrides this table's write.target-file-size-bytes                                                                                                                                                                                                                                                                              |
+| check-nullability      | true                       | Sets the nullable check on fields                                                                                                                                                                                                                                                                                                |
+| snapshot-property._custom-key_    | null            | Adds an entry with the custom key and its corresponding value in the snapshot summary (the `snapshot-property.` prefix is only required for DSv2)                                                                                                                                                                                |
+| fanout-enabled       | false        | Overrides this table's write.spark.fanout.enabled                                                                                                                                                                                                                                                                                |
+| check-ordering       | true        | Checks if the input schema and table schema are the same                                                                                                                                                                                                                                                                         |
+| isolation-level | null | Desired isolation level for DataFrame overwrite operations. `null` => no checks (for idempotent writes), `serializable` => checks for concurrent inserts or deletes in destination partitions, `snapshot` => checks for concurrent deletes in destination partitions.                                                            |
+| validate-from-snapshot-id | null | If the isolation level is set, the ID of the base snapshot from which to check for concurrent write conflicts into the table. This should be the snapshot before any reads from the table. It can be obtained via the [Table API](api.md#table-metadata) or the [Snapshots table](spark-queries.md#snapshots). If null, the table's oldest known snapshot is used. |
+| compression-codec      | Table write.(fileformat).compression-codec | Overrides this table's compression codec for this write                                                                                                                                                                                                                                                                          |
+| compression-level      | Table write.(fileformat).compression-level | Overrides this table's compression level for Parquet and Avro tables for this write                                                                                                                                                                                                                                              |
+| compression-strategy   | Table write.orc.compression-strategy       | Overrides this table's compression strategy for ORC tables for this write                                                                                                                                                                                                                                                        |
 
 CommitMetadata provides an interface to add custom metadata to a snapshot summary during a SQL execution, which can be beneficial for purposes such as auditing or change tracking. If properties start with `snapshot-property.`, then that prefix will be removed from each property. Here is an example:
 
diff --git a/1.4.1/docs/configuration.md b/1.4.1/docs/configuration.md
index 91ccffd..1b40129 100644
--- a/1.4.1/docs/configuration.md
+++ b/1.4.1/docs/configuration.md
@@ -124,16 +124,16 @@
 
 Iceberg catalogs support using catalog properties to configure catalog behaviors. Here is a list of commonly used catalog properties:
 
-| Property                          | Default            | Description                                            |
-| --------------------------------- | ------------------ | ------------------------------------------------------ |
-| catalog-impl                      | null               | a custom `Catalog` implementation to use by an engine  |
-| io-impl                           | null               | a custom `FileIO` implementation to use in a catalog   |
-| warehouse                         | null               | the root path of the data warehouse                    |
-| uri                               | null               | a URI string, such as Hive metastore URI               |
-| clients                           | 2                  | client pool size                                       |
-| cache-enabled                     | true               | Whether to cache catalog entries |
-| cache.expiration-interval-ms      | 30000              | How long catalog entries are locally cached, in milliseconds; 0 disables caching, negative values disable expiration |
-| metrics-reporter-impl | org.apache.iceberg.metrics.LoggingMetricsReporter | Custom `MetricsReporter` implementation to use in a catalog. See the [Metrics reporting](metrics-reporting) section for additional details |
+| Property                          | Default            | Description                                                                                                                                   |
+| --------------------------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------- |
+| catalog-impl                      | null               | A custom `Catalog` implementation for an engine to use                                                                                        |
+| io-impl                           | null               | A custom `FileIO` implementation to use in a catalog                                                                                          |
+| warehouse                         | null               | The root path of the data warehouse                                                                                                           |
+| uri                               | null               | A URI string, such as a Hive metastore URI                                                                                                    |
+| clients                           | 2                  | Client pool size                                                                                                                              |
+| cache-enabled                     | true               | Whether to cache catalog entries                                                                                                              |
+| cache.expiration-interval-ms      | 30000              | How long catalog entries are locally cached, in milliseconds; 0 disables caching, negative values disable expiration                          |
+| metrics-reporter-impl | org.apache.iceberg.metrics.LoggingMetricsReporter | Custom `MetricsReporter` implementation to use in a catalog. See the [Metrics reporting](metrics-reporting.md) section for additional details |
 
 `HadoopCatalog` and `HiveCatalog` can access the properties in their constructors.
 Any other custom catalog can access the properties by implementing `Catalog.initialize(catalogName, catalogProperties)`.
diff --git a/1.4.1/docs/spark-configuration.md b/1.4.1/docs/spark-configuration.md
index ef392f4..e8e9182 100644
--- a/1.4.1/docs/spark-configuration.md
+++ b/1.4.1/docs/spark-configuration.md
@@ -178,19 +178,19 @@
     .insertInto("catalog.db.table")
 ```
 
-| Spark option           | Default                    | Description                                                  |
-| ---------------------- | -------------------------- | ------------------------------------------------------------ |
-| write-format           | Table write.format.default | File format to use for this write operation; parquet, avro, or orc |
-| target-file-size-bytes | As per table property      | Overrides this table's write.target-file-size-bytes          |
-| check-nullability      | true                       | Sets the nullable check on fields                            |
-| snapshot-property._custom-key_    | null            | Adds an entry with custom-key and corresponding value in the snapshot summary (the `snapshot-property.` prefix is only required for DSv2)  |
-| fanout-enabled       | false        | Overrides this table's write.spark.fanout.enabled  |
-| check-ordering       | true        | Checks if input schema and table schema are same  |
-| isolation-level | null | Desired isolation level for Dataframe overwrite operations.  `null` => no checks (for idempotent writes), `serializable` => check for concurrent inserts or deletes in destination partitions, `snapshot` => checks for concurrent deletes in destination partitions. |
-| validate-from-snapshot-id | null | If isolation level is set, id of base snapshot from which to check concurrent write conflicts into a table. Should be the snapshot before any reads from the table. Can be obtained via [Table API](../../api#table-metadata) or [Snapshots table](../spark-queries#snapshots). If null, the table's oldest known snapshot is used. |
-| compression-codec      | Table write.(fileformat).compression-codec | Overrides this table's compression codec for this write      |
-| compression-level      | Table write.(fileformat).compression-level | Overrides this table's compression level for Parquet and Avro tables for this write |
-| compression-strategy   | Table write.orc.compression-strategy       | Overrides this table's compression strategy for ORC tables for this write |
+| Spark option           | Default                    | Description                                                                                                                                                                                                                                                                                                                      |
+| ---------------------- | -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| write-format           | Table write.format.default | File format to use for this write operation; parquet, avro, or orc                                                                                                                                                                                                                                                               |
+| target-file-size-bytes | As per table property      | Overrides this table's write.target-file-size-bytes                                                                                                                                                                                                                                                                              |
+| check-nullability      | true                       | Sets the nullable check on fields                                                                                                                                                                                                                                                                                                |
+| snapshot-property._custom-key_    | null            | Adds an entry with the custom key and its corresponding value in the snapshot summary (the `snapshot-property.` prefix is only required for DSv2)                                                                                                                                                                                |
+| fanout-enabled       | false        | Overrides this table's write.spark.fanout.enabled                                                                                                                                                                                                                                                                                |
+| check-ordering       | true        | Checks if the input schema and table schema are the same                                                                                                                                                                                                                                                                         |
+| isolation-level | null | Desired isolation level for DataFrame overwrite operations. `null` => no checks (for idempotent writes), `serializable` => checks for concurrent inserts or deletes in destination partitions, `snapshot` => checks for concurrent deletes in destination partitions.                                                            |
+| validate-from-snapshot-id | null | If the isolation level is set, the ID of the base snapshot from which to check for concurrent write conflicts into the table. This should be the snapshot before any reads from the table. It can be obtained via the [Table API](api.md#table-metadata) or the [Snapshots table](spark-queries.md#snapshots). If null, the table's oldest known snapshot is used. |
+| compression-codec      | Table write.(fileformat).compression-codec | Overrides this table's compression codec for this write                                                                                                                                                                                                                                                                          |
+| compression-level      | Table write.(fileformat).compression-level | Overrides this table's compression level for Parquet and Avro tables for this write                                                                                                                                                                                                                                              |
+| compression-strategy   | Table write.orc.compression-strategy       | Overrides this table's compression strategy for ORC tables for this write                                                                                                                                                                                                                                                        |
 
 CommitMetadata provides an interface to add custom metadata to a snapshot summary during a SQL execution, which can be beneficial for purposes such as auditing or change tracking. If properties start with `snapshot-property.`, then that prefix will be removed from each property. Here is an example:
 
diff --git a/1.4.2/docs/configuration.md b/1.4.2/docs/configuration.md
index 1533c24..e555796 100644
--- a/1.4.2/docs/configuration.md
+++ b/1.4.2/docs/configuration.md
@@ -124,16 +124,16 @@
 
 Iceberg catalogs support using catalog properties to configure catalog behaviors. Here is a list of commonly used catalog properties:
 
-| Property                          | Default            | Description                                            |
-| --------------------------------- | ------------------ | ------------------------------------------------------ |
-| catalog-impl                      | null               | a custom `Catalog` implementation to use by an engine  |
-| io-impl                           | null               | a custom `FileIO` implementation to use in a catalog   |
-| warehouse                         | null               | the root path of the data warehouse                    |
-| uri                               | null               | a URI string, such as Hive metastore URI               |
-| clients                           | 2                  | client pool size                                       |
-| cache-enabled                     | true               | Whether to cache catalog entries |
-| cache.expiration-interval-ms      | 30000              | How long catalog entries are locally cached, in milliseconds; 0 disables caching, negative values disable expiration |
-| metrics-reporter-impl | org.apache.iceberg.metrics.LoggingMetricsReporter | Custom `MetricsReporter` implementation to use in a catalog. See the [Metrics reporting](metrics-reporting) section for additional details |
+| Property                          | Default            | Description                                                                                                                                   |
+| --------------------------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------- |
+| catalog-impl                      | null               | A custom `Catalog` implementation for an engine to use                                                                                        |
+| io-impl                           | null               | A custom `FileIO` implementation to use in a catalog                                                                                          |
+| warehouse                         | null               | The root path of the data warehouse                                                                                                           |
+| uri                               | null               | A URI string, such as a Hive metastore URI                                                                                                    |
+| clients                           | 2                  | Client pool size                                                                                                                              |
+| cache-enabled                     | true               | Whether to cache catalog entries                                                                                                              |
+| cache.expiration-interval-ms      | 30000              | How long catalog entries are locally cached, in milliseconds; 0 disables caching, negative values disable expiration                          |
+| metrics-reporter-impl | org.apache.iceberg.metrics.LoggingMetricsReporter | Custom `MetricsReporter` implementation to use in a catalog. See the [Metrics reporting](metrics-reporting.md) section for additional details |
 
 `HadoopCatalog` and `HiveCatalog` can access the properties in their constructors.
 Any other custom catalog can access the properties by implementing `Catalog.initialize(catalogName, catalogProperties)`.
diff --git a/1.4.2/docs/spark-configuration.md b/1.4.2/docs/spark-configuration.md
index a6f1534..6aaa271 100644
--- a/1.4.2/docs/spark-configuration.md
+++ b/1.4.2/docs/spark-configuration.md
@@ -178,19 +178,19 @@
     .insertInto("catalog.db.table")
 ```
 
-| Spark option           | Default                    | Description                                                  |
-| ---------------------- | -------------------------- | ------------------------------------------------------------ |
-| write-format           | Table write.format.default | File format to use for this write operation; parquet, avro, or orc |
-| target-file-size-bytes | As per table property      | Overrides this table's write.target-file-size-bytes          |
-| check-nullability      | true                       | Sets the nullable check on fields                            |
-| snapshot-property._custom-key_    | null            | Adds an entry with custom-key and corresponding value in the snapshot summary (the `snapshot-property.` prefix is only required for DSv2)  |
-| fanout-enabled       | false        | Overrides this table's write.spark.fanout.enabled  |
-| check-ordering       | true        | Checks if input schema and table schema are same  |
-| isolation-level | null | Desired isolation level for Dataframe overwrite operations.  `null` => no checks (for idempotent writes), `serializable` => check for concurrent inserts or deletes in destination partitions, `snapshot` => checks for concurrent deletes in destination partitions. |
-| validate-from-snapshot-id | null | If isolation level is set, id of base snapshot from which to check concurrent write conflicts into a table. Should be the snapshot before any reads from the table. Can be obtained via [Table API](../../api#table-metadata) or [Snapshots table](../spark-queries#snapshots). If null, the table's oldest known snapshot is used. |
-| compression-codec      | Table write.(fileformat).compression-codec | Overrides this table's compression codec for this write      |
-| compression-level      | Table write.(fileformat).compression-level | Overrides this table's compression level for Parquet and Avro tables for this write |
-| compression-strategy   | Table write.orc.compression-strategy       | Overrides this table's compression strategy for ORC tables for this write |
+| Spark option           | Default                    | Description                                                                                                                                                                                                                                                                                                                      |
+| ---------------------- | -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| write-format           | Table write.format.default | File format to use for this write operation; parquet, avro, or orc                                                                                                                                                                                                                                                               |
+| target-file-size-bytes | As per table property      | Overrides this table's write.target-file-size-bytes                                                                                                                                                                                                                                                                              |
+| check-nullability      | true                       | Sets the nullable check on fields                                                                                                                                                                                                                                                                                                |
+| snapshot-property._custom-key_    | null            | Adds an entry with the custom key and its corresponding value in the snapshot summary (the `snapshot-property.` prefix is only required for DSv2)                                                                                                                                                                                |
+| fanout-enabled       | false        | Overrides this table's write.spark.fanout.enabled                                                                                                                                                                                                                                                                                |
+| check-ordering       | true        | Checks if the input schema and table schema are the same                                                                                                                                                                                                                                                                         |
+| isolation-level | null | Desired isolation level for DataFrame overwrite operations. `null` => no checks (for idempotent writes), `serializable` => checks for concurrent inserts or deletes in destination partitions, `snapshot` => checks for concurrent deletes in destination partitions.                                                            |
+| validate-from-snapshot-id | null | If the isolation level is set, the ID of the base snapshot from which to check for concurrent write conflicts into the table. This should be the snapshot before any reads from the table. It can be obtained via the [Table API](api.md#table-metadata) or the [Snapshots table](spark-queries.md#snapshots). If null, the table's oldest known snapshot is used. |
+| compression-codec      | Table write.(fileformat).compression-codec | Overrides this table's compression codec for this write                                                                                                                                                                                                                                                                          |
+| compression-level      | Table write.(fileformat).compression-level | Overrides this table's compression level for Parquet and Avro tables for this write                                                                                                                                                                                                                                              |
+| compression-strategy   | Table write.orc.compression-strategy       | Overrides this table's compression strategy for ORC tables for this write                                                                                                                                                                                                                                                        |
 
 CommitMetadata provides an interface to add custom metadata to a snapshot summary during a SQL execution, which can be beneficial for purposes such as auditing or change tracking. If properties start with `snapshot-property.`, then that prefix will be removed from each property. Here is an example:
 
diff --git a/1.4.3/docs/configuration.md b/1.4.3/docs/configuration.md
index 1533c24..e555796 100644
--- a/1.4.3/docs/configuration.md
+++ b/1.4.3/docs/configuration.md
@@ -124,16 +124,16 @@
 
 Iceberg catalogs support using catalog properties to configure catalog behaviors. Here is a list of commonly used catalog properties:
 
-| Property                          | Default            | Description                                            |
-| --------------------------------- | ------------------ | ------------------------------------------------------ |
-| catalog-impl                      | null               | a custom `Catalog` implementation to use by an engine  |
-| io-impl                           | null               | a custom `FileIO` implementation to use in a catalog   |
-| warehouse                         | null               | the root path of the data warehouse                    |
-| uri                               | null               | a URI string, such as Hive metastore URI               |
-| clients                           | 2                  | client pool size                                       |
-| cache-enabled                     | true               | Whether to cache catalog entries |
-| cache.expiration-interval-ms      | 30000              | How long catalog entries are locally cached, in milliseconds; 0 disables caching, negative values disable expiration |
-| metrics-reporter-impl | org.apache.iceberg.metrics.LoggingMetricsReporter | Custom `MetricsReporter` implementation to use in a catalog. See the [Metrics reporting](metrics-reporting) section for additional details |
+| Property                          | Default            | Description                                                                                                                                   |
+| --------------------------------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------- |
+| catalog-impl                      | null               | A custom `Catalog` implementation for an engine to use                                                                                        |
+| io-impl                           | null               | A custom `FileIO` implementation to use in a catalog                                                                                          |
+| warehouse                         | null               | The root path of the data warehouse                                                                                                           |
+| uri                               | null               | A URI string, such as a Hive metastore URI                                                                                                    |
+| clients                           | 2                  | Client pool size                                                                                                                              |
+| cache-enabled                     | true               | Whether to cache catalog entries                                                                                                              |
+| cache.expiration-interval-ms      | 30000              | How long catalog entries are locally cached, in milliseconds; 0 disables caching, negative values disable expiration                          |
+| metrics-reporter-impl | org.apache.iceberg.metrics.LoggingMetricsReporter | Custom `MetricsReporter` implementation to use in a catalog. See the [Metrics reporting](metrics-reporting.md) section for additional details |
 
 `HadoopCatalog` and `HiveCatalog` can access the properties in their constructors.
 Any other custom catalog can access the properties by implementing `Catalog.initialize(catalogName, catalogProperties)`.
diff --git a/1.4.3/docs/spark-configuration.md b/1.4.3/docs/spark-configuration.md
index a6f1534..6aaa271 100644
--- a/1.4.3/docs/spark-configuration.md
+++ b/1.4.3/docs/spark-configuration.md
@@ -178,19 +178,19 @@
     .insertInto("catalog.db.table")
 ```
 
-| Spark option           | Default                    | Description                                                  |
-| ---------------------- | -------------------------- | ------------------------------------------------------------ |
-| write-format           | Table write.format.default | File format to use for this write operation; parquet, avro, or orc |
-| target-file-size-bytes | As per table property      | Overrides this table's write.target-file-size-bytes          |
-| check-nullability      | true                       | Sets the nullable check on fields                            |
-| snapshot-property._custom-key_    | null            | Adds an entry with custom-key and corresponding value in the snapshot summary (the `snapshot-property.` prefix is only required for DSv2)  |
-| fanout-enabled       | false        | Overrides this table's write.spark.fanout.enabled  |
-| check-ordering       | true        | Checks if input schema and table schema are same  |
-| isolation-level | null | Desired isolation level for Dataframe overwrite operations.  `null` => no checks (for idempotent writes), `serializable` => check for concurrent inserts or deletes in destination partitions, `snapshot` => checks for concurrent deletes in destination partitions. |
-| validate-from-snapshot-id | null | If isolation level is set, id of base snapshot from which to check concurrent write conflicts into a table. Should be the snapshot before any reads from the table. Can be obtained via [Table API](../../api#table-metadata) or [Snapshots table](../spark-queries#snapshots). If null, the table's oldest known snapshot is used. |
-| compression-codec      | Table write.(fileformat).compression-codec | Overrides this table's compression codec for this write      |
-| compression-level      | Table write.(fileformat).compression-level | Overrides this table's compression level for Parquet and Avro tables for this write |
-| compression-strategy   | Table write.orc.compression-strategy       | Overrides this table's compression strategy for ORC tables for this write |
+| Spark option           | Default                    | Description                                                                                                                                                                                                                                                                                                                      |
+| ---------------------- | -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| write-format           | Table write.format.default | File format to use for this write operation; parquet, avro, or orc                                                                                                                                                                                                                                                               |
+| target-file-size-bytes | As per table property      | Overrides this table's write.target-file-size-bytes                                                                                                                                                                                                                                                                              |
+| check-nullability      | true                       | Sets the nullable check on fields                                                                                                                                                                                                                                                                                                |
+| snapshot-property._custom-key_    | null            | Adds an entry with the custom key and its corresponding value in the snapshot summary (the `snapshot-property.` prefix is only required for DSv2)                                                                                                                                                                                |
+| fanout-enabled       | false        | Overrides this table's write.spark.fanout.enabled                                                                                                                                                                                                                                                                                |
+| check-ordering       | true        | Checks if the input schema and table schema are the same                                                                                                                                                                                                                                                                         |
+| isolation-level | null | Desired isolation level for DataFrame overwrite operations. `null` => no checks (for idempotent writes), `serializable` => checks for concurrent inserts or deletes in destination partitions, `snapshot` => checks for concurrent deletes in destination partitions.                                                            |
+| validate-from-snapshot-id | null | If the isolation level is set, the ID of the base snapshot from which to check for concurrent write conflicts into the table. This should be the snapshot before any reads from the table. It can be obtained via the [Table API](api.md#table-metadata) or the [Snapshots table](spark-queries.md#snapshots). If null, the table's oldest known snapshot is used. |
+| compression-codec      | Table write.(fileformat).compression-codec | Overrides this table's compression codec for this write                                                                                                                                                                                                                                                                          |
+| compression-level      | Table write.(fileformat).compression-level | Overrides this table's compression level for Parquet and Avro tables for this write                                                                                                                                                                                                                                              |
+| compression-strategy   | Table write.orc.compression-strategy       | Overrides this table's compression strategy for ORC tables for this write                                                                                                                                                                                                                                                        |
 
 CommitMetadata provides an interface to add custom metadata to a snapshot summary during a SQL execution, which can be beneficial for purposes such as auditing or change tracking. If properties start with `snapshot-property.`, then that prefix will be removed from each property. Here is an example: