If you have been running an evaluation Druid cluster using the built-in Derby metadata storage and wish to migrate to a more production-capable metadata store such as MySQL or PostgreSQL, this document describes the necessary steps.
To ensure a clean migration, shut down all non-coordinator services so that the metadata state does not change during the migration.
When migrating from Derby, the coordinator processes will still need to be up initially, as they host the Derby database.
Druid provides an Export Metadata Tool for exporting metadata from Derby into CSV files which can then be imported into your new metadata store.
The tool also provides options for rewriting the deep storage locations of segments; this is useful for deep storage migration.
Run the `export-metadata` tool on your existing cluster, and save the CSV files it generates. After a successful export, you can shut down the coordinator.
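As a sketch, an export run might look like the following; the Derby `connectURI` and the `/tmp/csv` output directory are assumptions to adapt to your deployment:

```bash
cd ${DRUID_ROOT}
java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" org.apache.druid.cli.Main tools export-metadata --connectURI "jdbc:derby://localhost:1527/var/druid/metadata.db;" -o /tmp/csv
```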
Before importing the existing cluster metadata, you will need to set up the new metadata store.
The MySQL extension and PostgreSQL extension docs have instructions for initial database setup.
Update your Druid runtime properties with the new metadata configuration.
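For example, a MySQL configuration in `common.runtime.properties` might look like the following sketch; the host, database name, and credentials are placeholders for your own values:

```properties
# Load the MySQL metadata storage extension
druid.extensions.loadList=["mysql-metadata-storage"]

# Point Druid at the new metadata store (placeholder values)
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
```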
If you have set `druid.metadata.storage.connector.createTables` to `true` (the default), and your metadata connect user has DDL privileges, you can disregard this section, as Druid will create the metadata tables automatically on startup.
Druid provides a `metadata-init` tool for creating Druid's metadata tables. After initializing the Druid database, you can run the commands shown below from the root of the Druid package to initialize the tables.
In the example commands below:

- `lib` is the Druid lib directory.
- `extensions` is the Druid extensions directory.
- `base` corresponds to the value of `druid.metadata.storage.tables.base` in the configuration, `druid` by default.
- The `--connectURI` parameter corresponds to the value of `druid.metadata.storage.connector.connectURI`.
- The `--user` parameter corresponds to the value of `druid.metadata.storage.connector.user`.
- The `--password` parameter corresponds to the value of `druid.metadata.storage.connector.password`.

For MySQL:

```bash
cd ${DRUID_ROOT}
java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList="[\"mysql-metadata-storage\"]" -Ddruid.metadata.storage.type=mysql -Ddruid.node.type=metadata-init org.apache.druid.cli.Main tools metadata-init --connectURI="<mysql-uri>" --user <user> --password <pass> --base druid
```

For PostgreSQL:

```bash
cd ${DRUID_ROOT}
java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList="[\"postgresql-metadata-storage\"]" -Ddruid.metadata.storage.type=postgresql -Ddruid.node.type=metadata-init org.apache.druid.cli.Main tools metadata-init --connectURI="<postgresql-uri>" --user <user> --password <pass> --base druid
```
The same command can also be used to update existing Druid metadata tables to the latest version: any table that already exists is not created again, but any required ALTER statements are still executed.
After initializing the tables, refer to the import commands for your target database.
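As a sketch of what an import might look like for PostgreSQL, the exported CSV files can be loaded with `psql`'s client-side `\copy`; the host, user, CSV path, and database name below are assumptions, and a similar statement would be run for each exported table:

```bash
# Assumes CSVs were exported to /tmp/csv and the default table base name "druid";
# repeat with the matching CSV for each Druid metadata table.
psql -h <host> -U <user> -d druid -c "\copy druid_segments from '/tmp/csv/druid_segments.csv' with csv"
```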
After importing the metadata successfully, you can now restart your cluster.