To read a Onetable-synced Iceberg table from BigQuery, you have two options:

Onetable outputs metadata files for Iceberg target format syncs, which BigQuery can use to read BigLake tables.
```sql
CREATE EXTERNAL TABLE onetable_synced_iceberg_table
WITH CONNECTION `myproject.mylocation.myconnection`
OPTIONS (
  format = 'ICEBERG',
  uris = ["gs://mybucket/mydata/mytable/metadata/iceberg.metadata.json"]
)
```
:::danger Note
This method requires you to keep the metadata file reference updated whenever the table changes, and hence Google does not recommend this approach.
:::
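Once the external table is defined, it can be queried like any other BigQuery table. A minimal sketch, using the table name from the CREATE EXTERNAL TABLE statement above:

```sql
-- Query the BigLake external table backed by the Iceberg metadata file.
SELECT *
FROM onetable_synced_iceberg_table
LIMIT 10;
```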
:::danger Important: For Hudi source format to Iceberg target format use cases
The Hudi extensions provide the ability to add field IDs to the parquet schema when writing with Hudi. This is a requirement for some engines, like BigQuery and Snowflake, when reading an Iceberg table. If you are not planning on using Iceberg, then you do not need to add these to your Hudi writers.
:::
To ensure field IDs are added when writing with Hudi:

1. Add the extensions jar (hudi-extensions-0.1.0-SNAPSHOT-bundled.jar) to your class path. For example, when using Spark, add `--jars hudi-extensions-0.1.0-SNAPSHOT-bundled.jar` to the end of the command.
2. Add the following options to your Hudi writer:
   - `hoodie.avro.write.support.class: io.onetable.hudi.extensions.HoodieAvroWriteSupportWithFieldIds`
   - `hoodie.client.init.callback.classes: io.onetable.hudi.extensions.AddFieldIdsClientInitCallback`

You can use two options to register Onetable synced Iceberg tables to BigLake Metastore:
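As an illustration, step 1 above might look like the following when launching a Spark shell; the jar path is an example, so adjust it for your environment (the writer options from step 2 are then set on the Hudi writer inside the session):

```shell
# Example only: put the Hudi extensions jar on the class path when
# starting Spark, so the field-ID write support classes can be loaded.
spark-shell --jars hudi-extensions-0.1.0-SNAPSHOT-bundled.jar
```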
This document explains how to query Hudi and Delta table formats through the use of manifest files.
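For Hudi and Delta tables, the external table is defined over a manifest file rather than Iceberg metadata. A hedged sketch, assuming BigQuery's manifest file support (`file_set_spec_type = 'NEW_LINE_DELIMITED_MANIFEST'`) and a hypothetical manifest path:

```sql
-- Example only: the dataset, table name, and manifest URI are placeholders.
CREATE EXTERNAL TABLE mydataset.onetable_synced_hudi_table
WITH CONNECTION `myproject.mylocation.myconnection`
OPTIONS (
  format = 'PARQUET',
  file_set_spec_type = 'NEW_LINE_DELIMITED_MANIFEST',
  uris = ["gs://mybucket/mydata/mytable/manifest/latest-snapshot.csv"]
)
```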