JIRA: https://issues.apache.org/jira/browse/HUDI-3625
As you scale your Apache Hudi workloads over cloud object stores like Amazon S3, you may hit request throttling limits, which in turn impacts performance. In this RFC, we propose to support an alternate storage layout that is optimized for Amazon S3 and other cloud object stores, which helps achieve maximum throughput and significantly reduces throttling.
In addition, we propose an interface that allows users to implement their own custom strategy for distributing data files across cloud stores, HDFS, or on-premises storage based on their specific use cases.
Apache Hudi follows the traditional Hive storage layout when writing files to storage:
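For illustration (bucket and table names are placeholders), the traditional layout keeps the Hudi metadata and all partition folders under a single table prefix:

```
s3://<table_bucket>/<hudi_table_name>/.hoodie/...
s3://<table_bucket>/<hudi_table_name>/country=usa/<base and log files>
s3://<table_bucket>/<hudi_table_name>/country=india/<base and log files>
...
```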
While this storage layout scales well for HDFS, it increases the probability of hitting request throttle limits when working with cloud object stores like Amazon S3 and others. This is because Amazon S3 and other cloud stores throttle requests based on object prefix. Amazon S3 does scale based on request patterns for different prefixes and adds internal partitions (each with its own request limits), but there can be a 30-60 minute wait before new partitions are created. Thus, keeping all files/objects under the same table path prefix can cause these request limits to be hit for the table prefix, especially as workloads scale and several thousands of files are being written/updated concurrently. This hurts performance, since retrying failed requests reduces throughput, and can result in occasional failures if the retries continue to be throttled and never succeed.
The traditional storage layout also tightly couples the partitions as folders under the table path. However, some users want the flexibility to distribute files/partitions under multiple different paths across cloud stores, HDFS, etc. based on their specific needs. For example, some customers want to distribute the files of each partition under a separate S3 bucket with its own encryption key. It is not possible to implement such use cases with Hudi today.
The high-level proposal here is to introduce an object store storage strategy, where all files are distributed evenly across multiple randomly generated prefixes under the Amazon S3 bucket, instead of being stored under a common table path/prefix. This would spread requests evenly across different prefixes, prompting Amazon S3 to create internal partitions for the prefixes, each with its own request limit. This significantly reduces the possibility of hitting the request limit for a specific prefix/partition.
In addition, we want to expose an interface that provides users the flexibility to implement their own strategy for distributing files if using the traditional Hive storage strategy or federated storage strategy (proposed in this RFC) does not meet their use-case.
```java
/**
 * Interface for providing storage file locations.
 */
public interface HoodieStorageStrategy extends Serializable {
  /**
   * Return a storage location for the given filename.
   *
   * @param fileId data file ID
   * @return a storage location string for a data file
   */
  String storageLocation(String fileId);

  /**
   * Return a storage location for the given partition and filename.
   *
   * @param partitionPath partition path for the file
   * @param fileId data file ID
   * @return a storage location string for a data file
   */
  String storageLocation(String partitionPath, String fileId);
}
```
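To illustrate the flexibility of this interface, here is a minimal sketch of a custom strategy for the use case mentioned above, where each partition's files live in a dedicated bucket with its own encryption key. The class name, bucket naming scheme, and the fallback for the single-argument variant are illustrative assumptions, not part of this proposal.

```java
/**
 * Hypothetical strategy that places each partition's files in a dedicated
 * bucket, e.g. to allow per-partition encryption keys.
 */
public class PerPartitionBucketStorageStrategy implements HoodieStorageStrategy {

  private final String bucketPrefix;

  public PerPartitionBucketStorageStrategy(String bucketPrefix) {
    this.bucketPrefix = bucketPrefix;
  }

  @Override
  public String storageLocation(String fileId) {
    // No partition information (e.g. a non-partitioned table): use a default bucket.
    return "s3://" + bucketPrefix + "-default/";
  }

  @Override
  public String storageLocation(String partitionPath, String fileId) {
    // One bucket per partition, e.g. s3://<bucketPrefix>-country-usa/
    String sanitized = partitionPath.toLowerCase().replaceAll("[^a-z0-9]", "-");
    return "s3://" + bucketPrefix + "-" + sanitized + "/";
  }
}
```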
We want to distribute files evenly across multiple random prefixes, instead of following the traditional Hive storage layout of keeping them under a common table path/prefix. In addition to the Table Path, for this new layout the user will configure another Table Storage Path under which the actual data files will be distributed. The original Table Path will be used to maintain the table/partition Hudi metadata.
For the purpose of this documentation, let's assume:
```
Table Path         => s3://<table_bucket>/<hudi_table_name>/
Table Storage Path => s3://<table_storage_bucket>/
```
Note: The Table Storage Path can be a path in the same Amazon S3 bucket or in a different bucket. For best results, the Table Storage Path should be a top-level bucket rather than a prefix under a bucket, to avoid multiple tables sharing the same prefix.
We will use a hashing function on the Partition Path/File ID to map them to a prefix generated under the Table Storage Path:
```
// Hashing on the file ID makes sure that base file and its log files fall under the same folder
s3://<table_storage_bucket>/<hash_prefix>/..
```
In addition, under the hash prefix we will follow a folder structure by appending the Hudi Table Name and Partition. This folder structuring would be useful if we ever have to do a file system listing to re-create the metadata file list for the table (discussed more in the next section). Here is how the final layout would look for partitioned tables:
```
s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=usa/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
s3://<table_storage_bucket>/01f50736/<hudi_table_name>/country=india/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
...
s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=uk/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=india/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=usa/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
...
```
For non-partitioned tables, this is how it would look:
```
s3://<table_storage_bucket>/01f50736/<hudi_table_name>/075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.parquet
s3://<table_storage_bucket>/01f50736/<hudi_table_name>/4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.parquet
s3://<table_storage_bucket>/01f50736/<hudi_table_name>/.9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.log.1_0-22-26
...
s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/9320889c-8537-4aa7-a63e-ef088b9a21ce-0_9-11-51_20220301005056692.parquet
s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/a62aa56b-d55e-4a2b-88a6-d603ef26775c-0_8-11-50_20220301005056692.parquet
s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.4b0c6b40-2ac0-4a1c-a26f-6338aa4db22e-0_6-11-48_20220301005056692.log.1_0-22-26
s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/.075f3295-def8-4a42-a927-07fd2dd2976c-0_7-11-49_20220301005056692.log.1_0-22-26
...
```
Note: The storage strategy only returns a storage location rather than a full path. In the above example, the storage location is `s3://<table_storage_bucket>/0bfb3d6e/`, and the lower-level folder structure is appended automatically to form the actual file path. In other words, users can only customize the upper-level folder structure (the storage location). Having a fixed lower-level folder structure is beneficial because it keeps the table name and partition encoded in every file path, which allows the file listing to be re-created from storage if the metadata table is ever lost (discussed below).
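To make the split between the storage location and the fixed lower-level structure concrete, here is a minimal sketch of how the full file path could be composed. The `StoragePathComposer` helper and its arguments are illustrative only and not an actual Hudi API; a partitioned table is assumed.

```java
/**
 * Illustrative only: combines the location returned by the strategy with the
 * fixed lower-level structure (<table name>/<partition>/<file name>).
 */
public class StoragePathComposer {

  public static String composeFilePath(HoodieStorageStrategy strategy,
                                       String tableName,
                                       String partitionPath,
                                       String fileId,
                                       String fileName) {
    // e.g. s3://<table_storage_bucket>/0bfb3d6e/
    String location = strategy.storageLocation(partitionPath, fileId);
    // e.g. s3://<table_storage_bucket>/0bfb3d6e/<hudi_table_name>/country=usa/<file name>
    return location + tableName + "/" + partitionPath + "/" + fileName;
  }
}
```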
The storage strategy would be persisted in the table config (`.hoodie/hoodie.properties`), and the strategy for the metadata table would always be set to the default. So the original Table Path will continue to store the metadata folder and partition metadata files:
```
s3://<table_bucket>/<hudi_table_name>/.hoodie/...
s3://<table_bucket>/<hudi_table_name>/country=usa/.hoodie_partition_metadata
s3://<table_bucket>/<hudi_table_name>/country=india/.hoodie_partition_metadata
s3://<table_bucket>/<hudi_table_name>/country=uk/.hoodie_partition_metadata
...
```
We can re-use the implementation of the `HashID` class to generate hashes on the File ID or Partition + File ID; it uses the XXHash function with 32/64 bits and is known for being fast.
The hashing function should be made user configurable for use cases like bucketing or dynamic sub-partitioning/re-hash to reduce the number of hash prefixes. Having too many unique hash prefixes would make files too dispersed, and affect performance on other operations such as listing.
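Below is a minimal sketch of how the hash prefix could be derived, assuming Hudi's `HashID` class exposes a `hash(String, Size)` method returning the raw hash bytes. The prefix length, the hex encoding, and the `HashPrefixGenerator` helper itself are illustrative assumptions.

```java
import org.apache.hudi.common.util.hash.HashID;

public class HashPrefixGenerator {

  /**
   * Derives an 8-character hex prefix (e.g. "01f50736") from the partition path
   * and file ID, so that a base file and its log files map to the same prefix.
   */
  public static String hashPrefix(String partitionPath, String fileId) {
    // Hash on partition + file ID using the 32-bit XXHash variant.
    byte[] hashBytes = HashID.hash(partitionPath + fileId, HashID.Size.BITS_32);
    StringBuilder prefix = new StringBuilder();
    for (byte b : hashBytes) {
      prefix.append(String.format("%02x", b));
    }
    return prefix.toString();
  }
}
```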
In RFC-15, we introduced an internal Metadata Table with a `files` partition that maintains the mapping from each partition to the list of files in that partition stored under the Table Path. This mapping is kept up to date as operations are performed on the original table. We will leverage the same mechanism to maintain mappings to files stored at the Table Storage Path under different prefixes.
Here are some of the design considerations:
The metadata table is not an optional optimization but a prerequisite for federated storage to work. Since Hudi 0.11, the metadata table has been enabled by default, so users can enable this feature as long as they do not explicitly turn the metadata table off; if they do, we should throw an exception.
Existing tables cannot switch storage strategy without being re-bootstrapped with the new strategy.
The Instant metadata (`HoodieCommitMetadata`, `HoodieCleanMetadata`, etc.) will always act as the source of file listings used to populate the metadata table.
If there is an error reading from the metadata table, we will not fall back to listing from the file system.
In case the metadata table gets corrupted or lost, we need a solution to reconstruct it from the files distributed using federated storage. We will likely have to implement file system listing logic that can recover the full partition-to-files mapping by listing all the prefixes under the Table Storage Path, as sketched below. Following the folder structure of adding the table name/partitions under each prefix helps in performing this listing and identifying the table/partition each file belongs to.
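A rough sketch of what such a reconstruction listing could look like, assuming the fixed `<prefix>/<table name>/<partition>/<file>` layout described above for partitioned tables and using the Hadoop FileSystem API; the `FederatedStorageListing` helper and its method name are illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FederatedStorageListing {

  /**
   * Rebuilds the partition -> file names mapping for one table by walking
   * every hash prefix under the table storage path.
   */
  public static Map<String, List<String>> listPartitionToFiles(
      Configuration conf, String tableStoragePath, String tableName) throws Exception {
    Path storageRoot = new Path(tableStoragePath);
    FileSystem fs = storageRoot.getFileSystem(conf);
    Map<String, List<String>> partitionToFiles = new HashMap<>();

    // Level 1: hash prefixes, e.g. s3://<table_storage_bucket>/01f50736/
    for (FileStatus prefix : fs.listStatus(storageRoot)) {
      Path tablePath = new Path(prefix.getPath(), tableName);
      if (!fs.exists(tablePath)) {
        continue; // This prefix holds no files for the given table.
      }
      // Level 2: partitions, e.g. .../<hudi_table_name>/country=usa/
      for (FileStatus partition : fs.listStatus(tablePath)) {
        String partitionName = partition.getPath().getName();
        // Level 3: base and log files under the partition.
        for (FileStatus file : fs.listStatus(partition.getPath())) {
          partitionToFiles
              .computeIfAbsent(partitionName, k -> new ArrayList<>())
              .add(file.getPath().getName());
        }
      }
    }
    return partitionToFiles;
  }
}
```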
Spark, Hive, Presto, and Trino are already integrated to use metadata-based listing. In general, since these query engines are able to use Hudi's metadata table, there should ideally be no changes required to make them work with federated storage. Here are some considerations:
Spark DataSource and Spark SQL queries have been integrated with metadata-based listing via Hudi's custom implementation of Spark's FileIndex interface. However, if Spark DataSource queries are used with globbed paths, then the FileIndex path does not kick in, and the listing falls back to Spark's `InMemoryFileIndex` with Hudi's path filter applied. Thus, Spark DataSource queries with globbed paths would not work with federated storage.
Query engines should be able to determine that federated storage is configured and rely on the metadata table to list files. It should not be the user's responsibility to enable metadata listing on the query engine side.
We need to ensure that partition pruning continues to work for the query engines.