Cluster Deployment

This guide describes how to manually deploy a cluster instance consisting of 3 ConfigNodes and 3 DataNodes (commonly referred to as a 3C3D cluster).

1. Prerequisites

  1. System Configuration: Ensure the system has been configured according to the preparation guidelines.

  2. IP Configuration: It is recommended to use hostnames for IP configuration to prevent issues caused by IP address changes. Configure the /etc/hosts file on each server. For example, if the local IP is 11.101.17.224 and the hostname is iotdb-1, use the following command to add the mapping:

    echo "11.101.17.224  iotdb-1" >> /etc/hosts 
    

    Use the hostname for cn_internal_address and dn_internal_address in IoTDB configuration.

  3. Unmodifiable Parameters: Some parameters cannot be changed after the first startup. Refer to the Parameter Configuration section.

  4. Installation Path: Ensure the installation path contains no spaces or non-ASCII characters to prevent runtime issues.

  5. User Permissions: Use one of the following user accounts for installation, deployment, and operation:

    • Root User (Recommended): This avoids permission-related issues.
    • Non-Root User:
      • Use the same user for all operations, including starting, activating, and stopping services.
      • Avoid using sudo, which can cause permission conflicts.
  6. Monitoring Panel: Deploy a monitoring panel to track key performance metrics. Contact the Timecho team for access and refer to the “Monitoring Panel Deployment” guide.

2. Preparation

  1. Obtain the TimechoDB installation package timechodb-{version}-bin.zip, following the IoTDB-Package instructions.

  2. Configure the operating system environment according to Environment Requirement

2.1 Pre-installation Check

To ensure the IoTDB Enterprise Edition installation package you obtained is complete and authentic, we recommend performing an SHA512 verification before proceeding with the installation and deployment.

Preparation:

  • Obtain the officially released SHA512 checksum: Find the “SHA512 Checksum” corresponding to each version in the Release History document.

Verification Steps (Linux as an Example):

  1. Open the terminal and navigate to the directory where the installation package is stored (e.g., /data/iotdb):
       cd /data/iotdb
    
  2. Execute the following command to calculate the hash value:
       sha512sum timechodb-{version}-bin.zip
    
  3. The terminal will output a result (the left part is the SHA512 checksum, and the right part is the file name):


  4. Compare the output with the official SHA512 checksum. Once confirmed that they match, proceed with the installation and deployment following the procedures below.

Notes:

  • If the verification results do not match, please contact Timecho Team to re-obtain the installation package.
  • If a “file not found” prompt appears during verification, check whether the file path is correct or if the installation package has been fully downloaded.
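
The manual comparison above can also be automated with sha512sum -c, which reads an expected checksum from a file and reports OK on a match. The sketch below uses a small stand-in file so it is self-contained; on a real server, substitute the downloaded package and the published checksum:

```shell
# Create a stand-in "package" so the example is self-contained; on a real
# server, skip this and use the downloaded timechodb-{version}-bin.zip.
printf 'demo payload\n' > timechodb-demo-bin.zip

# Save the expected checksum line to a file (generated locally here; in
# practice, copy the value from the Release History document).
sha512sum timechodb-demo-bin.zip > official.sha512

# The check itself: prints "<file>: OK" when the hashes match.
sha512sum -c official.sha512
```

A non-zero exit status (and a "FAILED" line) indicates a mismatch, in which case re-obtain the package as described in the notes above.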

3. Installation Steps

Take a cluster of three Linux servers with the following information as an example:

| Node IP       | Host Name | Service              |
|---------------|-----------|----------------------|
| 11.101.17.224 | iotdb-1   | ConfigNode, DataNode |
| 11.101.17.225 | iotdb-2   | ConfigNode, DataNode |
| 11.101.17.226 | iotdb-3   | ConfigNode, DataNode |

3.1 Configure Hostnames

On all three servers, configure the hostnames by editing the /etc/hosts file. Use the following commands:

echo "11.101.17.224  iotdb-1"  >> /etc/hosts
echo "11.101.17.225  iotdb-2"  >> /etc/hosts
echo "11.101.17.226  iotdb-3"  >> /etc/hosts 
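
To double-check the mappings before relying on them, the entries can be parsed back out of the file. The sketch below runs against a self-contained copy of the example entries; on a real server, query /etc/hosts directly (for example with getent hosts iotdb-1):

```shell
# Work on a self-contained copy of the example entries; on a real server,
# point hosts_file at /etc/hosts instead.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
11.101.17.224  iotdb-1
11.101.17.225  iotdb-2
11.101.17.226  iotdb-3
EOF

# Look up each hostname and print the IP it maps to.
for host in iotdb-1 iotdb-2 iotdb-3; do
  ip=$(awk -v h="$host" '$2 == h {print $1}' "$hosts_file")
  echo "$host -> $ip"
done
```

If any name prints an empty or wrong address, fix /etc/hosts before starting IoTDB, since the internal-address parameters cannot be changed after the first startup.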

3.2 Extract Installation Package

Unzip the installation package and enter the installation directory:

unzip  timechodb-{version}-bin.zip
cd  timechodb-{version}-bin

3.3 Parameters Configuration

  • Memory Configuration

    Edit the following files for memory allocation:

    • ConfigNode: ./conf/confignode-env.sh (or .bat for Windows)
    • DataNode: ./conf/datanode-env.sh (or .bat for Windows)
    | Parameter   | Description                        | Default | Recommended | Notes                                                              |
    |-------------|------------------------------------|---------|-------------|--------------------------------------------------------------------|
    | MEMORY_SIZE | Total memory allocated to the node | Empty   | As needed   | Save the file; changes take effect after the service is restarted. |
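
As an illustration, a fixed allocation could look like the line below in ./conf/datanode-env.sh; the 8G figure is an assumption to be sized against the machine's physical memory, not an official recommendation:

```shell
# In conf/datanode-env.sh — total memory for this DataNode
# (the 8G value is illustrative, not an official default)
MEMORY_SIZE=8G
```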

General Configuration

Set the following parameters in ./conf/iotdb-system.properties. Refer to ./conf/iotdb-system.properties.template for a complete list.

Cluster-Level Parameters:

| Parameter                 | Description                                                                        | 11.101.17.224  | 11.101.17.225  | 11.101.17.226  |
|---------------------------|------------------------------------------------------------------------------------|----------------|----------------|----------------|
| cluster_name              | Name of the cluster                                                                | defaultCluster | defaultCluster | defaultCluster |
| schema_replication_factor | Metadata replication factor; the DataNode count shall not be fewer than this value | 3              | 3              | 3              |
| data_replication_factor   | Data replication factor; the DataNode count shall not be fewer than this value     | 2              | 2              | 2              |

ConfigNode Parameters

| Parameter           | Description                                                              | Default         | Recommended                                                                             | 11.101.17.224 | 11.101.17.225 | 11.101.17.226 | Notes                                       |
|---------------------|--------------------------------------------------------------------------|-----------------|-----------------------------------------------------------------------------------------|---------------|---------------|---------------|---------------------------------------------|
| cn_internal_address | Address used for internal communication within the cluster               | 127.0.0.1       | Server's IPv4 address or hostname; use the hostname to avoid issues when the IP changes | iotdb-1       | iotdb-2       | iotdb-3       | Cannot be modified after the first startup. |
| cn_internal_port    | Port used for internal communication within the cluster                  | 10710           | 10710                                                                                   | 10710         | 10710         | 10710         | Cannot be modified after the first startup. |
| cn_consensus_port   | Port used for consensus protocol communication among ConfigNode replicas | 10720           | 10720                                                                                   | 10720         | 10720         | 10720         | Cannot be modified after the first startup. |
| cn_seed_config_node | Address of the ConfigNode for registering and joining the cluster        | 127.0.0.1:10710 | Address and port of the seed ConfigNode (cn_internal_address:cn_internal_port)          | iotdb-1:10710 | iotdb-1:10710 | iotdb-1:10710 | Cannot be modified after the first startup. |

DataNode Parameters

| Parameter                       | Description                                                       | Default         | Recommended                                                                             | 11.101.17.224 | 11.101.17.225 | 11.101.17.226 | Notes                                       |
|---------------------------------|-------------------------------------------------------------------|-----------------|-----------------------------------------------------------------------------------------|---------------|---------------|---------------|---------------------------------------------|
| dn_rpc_address                  | Address for the client RPC service                                | 0.0.0.0         | Server's IPv4 address or hostname                                                       | iotdb-1       | iotdb-2       | iotdb-3       | Effective after restarting the service.     |
| dn_rpc_port                     | Port for the client RPC service                                   | 6667            | 6667                                                                                    | 6667          | 6667          | 6667          | Effective after restarting the service.     |
| dn_internal_address             | Address used for internal communication within the cluster        | 127.0.0.1       | Server's IPv4 address or hostname; use the hostname to avoid issues when the IP changes | iotdb-1       | iotdb-2       | iotdb-3       | Cannot be modified after the first startup. |
| dn_internal_port                | Port used for internal communication within the cluster           | 10730           | 10730                                                                                   | 10730         | 10730         | 10730         | Cannot be modified after the first startup. |
| dn_mpp_data_exchange_port       | Port used for receiving data streams                              | 10740           | 10740                                                                                   | 10740         | 10740         | 10740         | Cannot be modified after the first startup. |
| dn_data_region_consensus_port   | Port used for data replica consensus protocol communication       | 10750           | 10750                                                                                   | 10750         | 10750         | 10750         | Cannot be modified after the first startup. |
| dn_schema_region_consensus_port | Port used for metadata replica consensus protocol communication   | 10760           | 10760                                                                                   | 10760         | 10760         | 10760         | Cannot be modified after the first startup. |
| dn_seed_config_node             | Address of the ConfigNode for registering and joining the cluster | 127.0.0.1:10710 | Address of the first ConfigNode (cn_internal_address:cn_internal_port)                  | iotdb-1:10710 | iotdb-1:10710 | iotdb-1:10710 | Cannot be modified after the first startup. |

Note: Ensure files are saved after editing. Tools like VSCode Remote do not save changes automatically.
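
Taken together, the tables above translate into an iotdb-system.properties on iotdb-1 along the following lines (values copied from the example cluster; iotdb-2 and iotdb-3 differ only in the address parameters):

```properties
# Cluster-level
cluster_name=defaultCluster
schema_replication_factor=3
data_replication_factor=2

# ConfigNode
cn_internal_address=iotdb-1
cn_internal_port=10710
cn_consensus_port=10720
cn_seed_config_node=iotdb-1:10710

# DataNode
dn_rpc_address=iotdb-1
dn_rpc_port=6667
dn_internal_address=iotdb-1
dn_internal_port=10730
dn_mpp_data_exchange_port=10740
dn_data_region_consensus_port=10750
dn_schema_region_consensus_port=10760
dn_seed_config_node=iotdb-1:10710
```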

3.4 Start ConfigNode Instances

  1. Start the first ConfigNode (iotdb-1) as the seed node:

     cd sbin
     ./start-confignode.sh -d    # the -d flag starts the process in the background

  2. Start the remaining ConfigNodes (iotdb-2 and iotdb-3) in sequence.

    If the startup fails, refer to the Common Questions section below for troubleshooting.

3.5 Start DataNode Instances

On each server, navigate to the sbin directory and start the DataNode:

cd sbin
./start-datanode.sh -d    # the -d flag starts the process in the background

3.6 Activate Database

Option 1: Command-Based Activation

  1. Enter the IoTDB CLI on any node of the cluster:
  • For Tree Model:

    # For Linux or macOS
    ./start-cli.sh
    
    # For Windows
    ./start-cli.bat
    
  2. Run the following command to retrieve the machine code required for activation:

    show system info
    
  3. Copy the returned machine codes of all nodes in the cluster (displayed as a green string) and send them to the Timecho team:

    +--------------------------------------------------------------+
    |                                                    SystemInfo|
    +--------------------------------------------------------------+
    |01-TE5NLES4-UDDWCMYE,01-GG5NLES4-XXDWCMYE,01-FF5NLES4-WWWWCMYE|
    +--------------------------------------------------------------+
    Total line number = 1
    It costs 0.030s
    
  4. Enter the activation codes provided by the Timecho team in the CLI in sequence, using the following format. Wrap the activation code in single quotes ('):

    IoTDB> activate '01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA===,01-D4EYQGPZ-EAUJJODW-NUKRDR6F-TUQS3B75-EDZFLK3A-6BOKJFFZ-ALDHOMN7-NB2E4BHI-7ZKGFVK6-GCIFXA4T-UG3XJTTD-SHJV6F2P-Q27B4OMJ-R47ZDIM3-UUASUXG2-OQXGVZCO-MMYKICZU-TWFQYYAO-ZOAGOKJA-NYHQTA5U-EWAR4EP5-MRC6R2CI-PKUTKRCT-7UDGRH3F-7BYV4P5D-6KKIA==='
    

Option 2: File-Based Activation

  1. Start all ConfigNodes and DataNodes.
  2. Copy the system_info file from the activation directory on each server and send them to the Timecho team.
  3. Place the license files provided by the Timecho team into the corresponding activation folder for each node.

3.7 Verify Activation

In the CLI, you can check the activation status by running the show activation command; the example below shows a status of ACTIVATED, indicating successful activation.

IoTDB> show activation
+---------------+---------+-----------------------------+
|    LicenseInfo|    Usage|                        Limit|
+---------------+---------+-----------------------------+
|         Status|ACTIVATED|                            -|
|    ExpiredTime|        -|2026-04-30T00:00:00.000+08:00|
|  DataNodeLimit|        1|                    Unlimited|
|       CpuLimit|       16|                    Unlimited|
|    DeviceLimit|       30|                    Unlimited|
|TimeSeriesLimit|       72|                1,000,000,000|
+---------------+---------+-----------------------------+

3.8 One-click Cluster Start and Stop

3.8.1 Overview

The sbin subdirectory in the IoTDB root directory contains the start-all.sh and stop-all.sh scripts. Together with the iotdb-cluster.properties configuration file in the conf subdirectory, they allow all nodes in the cluster to be started or stopped with one command from a single node, simplifying deployment and day-to-day operation of the cluster.

The following section introduces the configuration items in the iotdb-cluster.properties file.

3.8.2 Configuration Items

Note:

  • When the cluster changes, this configuration file needs to be manually updated.
  • If the iotdb-cluster.properties configuration file is not set up and the start-all.sh or stop-all.sh scripts are executed, the scripts will, by default, start or stop the ConfigNode and DataNode nodes located in the IOTDB_HOME directory where the scripts reside.
  • It is recommended to configure SSH passwordless login: If not configured, the script will prompt for the server password after execution to facilitate subsequent start, stop, or destroy operations. If already configured, there is no need to enter the server password during script execution.
  • confignode_address_list

    | Name        | confignode_address_list |
    |-------------|-------------------------|
    | Description | A list of IP addresses of the hosts where the ConfigNodes to be started/stopped are located; separate multiple addresses with commas. |
    | Type        | String |
    | Default     | None |
    | Effective   | After restarting the system |

  • datanode_address_list

    | Name        | datanode_address_list |
    |-------------|-----------------------|
    | Description | A list of IP addresses of the hosts where the DataNodes to be started/stopped are located; separate multiple addresses with commas. |
    | Type        | String |
    | Default     | None |
    | Effective   | After restarting the system |

  • ssh_account

    | Name        | ssh_account |
    |-------------|-------------|
    | Description | The username used to log in to the target hosts via SSH. All hosts must have the same username. |
    | Type        | String |
    | Default     | root |
    | Effective   | After restarting the system |

  • ssh_port

    | Name        | ssh_port |
    |-------------|----------|
    | Description | The SSH port exposed by the target hosts. All hosts must have the same SSH port. |
    | Type        | int |
    | Default     | 22 |
    | Effective   | After restarting the system |

  • confignode_deploy_path

    | Name        | confignode_deploy_path |
    |-------------|------------------------|
    | Description | The path on the target hosts where all ConfigNodes to be started/stopped are located. All ConfigNodes must be in the same directory on their respective hosts. |
    | Type        | String |
    | Default     | None |
    | Effective   | After restarting the system |

  • datanode_deploy_path

    | Name        | datanode_deploy_path |
    |-------------|----------------------|
    | Description | The path on the target hosts where all DataNodes to be started/stopped are located. All DataNodes must be in the same directory on their respective hosts. |
    | Type        | String |
    | Default     | None |
    | Effective   | After restarting the system |
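
For the example 3C3D cluster, a filled-in iotdb-cluster.properties might look like the sketch below. The deploy paths are assumptions based on the /data/iotdb directory used earlier in this guide, and hostnames are used on the assumption that they resolve on the node running the scripts:

```properties
confignode_address_list=iotdb-1,iotdb-2,iotdb-3
datanode_address_list=iotdb-1,iotdb-2,iotdb-3
ssh_account=root
ssh_port=22
confignode_deploy_path=/data/iotdb/timechodb-{version}-bin
datanode_deploy_path=/data/iotdb/timechodb-{version}-bin
```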

4. Maintenance

4.1 ConfigNode Maintenance

ConfigNode maintenance includes adding and removing ConfigNodes. Common use cases include:

  • Cluster Expansion: If the cluster contains only 1 ConfigNode, adding 2 more ConfigNodes enhances high availability, resulting in a total of 3 ConfigNodes.
  • Cluster Fault Recovery: If a ConfigNode's machine fails and it cannot function normally, remove the faulty ConfigNode and add a new one to the cluster.

Note: After completing ConfigNode maintenance, ensure that the cluster contains either 1 or 3 active ConfigNodes. Two ConfigNodes do not provide high availability, and more than three ConfigNodes can degrade performance.

Adding a ConfigNode

Linux / macOS:

sbin/start-confignode.sh

Windows:

# Before version V2.0.4.x
sbin\start-confignode.bat

# V2.0.4.x and later versions
sbin\windows\start-confignode.bat

Removing a ConfigNode

  1. Connect to the cluster using the CLI and confirm the internal address and port of the ConfigNode to be removed:

    show confignodes;
    

Example output:

IoTDB> show confignodes
+------+-------+---------------+------------+--------+
|NodeID| Status|InternalAddress|InternalPort|    Role|
+------+-------+---------------+------------+--------+
|     0|Running|      127.0.0.1|       10710|  Leader|
|     1|Running|      127.0.0.1|       10711|Follower|
|     2|Running|      127.0.0.1|       10712|Follower|
+------+-------+---------------+------------+--------+
Total line number = 3
It costs 0.030s
  2. Remove the ConfigNode using the script:

Linux / macOS:

sbin/remove-confignode.sh [confignode_id]
# Or:
sbin/remove-confignode.sh [cn_internal_address:cn_internal_port]

Windows:

# Before version V2.0.4.x
sbin\remove-confignode.bat [confignode_id]
# Or:
sbin\remove-confignode.bat [cn_internal_address:cn_internal_port]

# V2.0.4.x and later versions
sbin\windows\remove-confignode.bat [confignode_id]
# Or:
sbin\windows\remove-confignode.bat [cn_internal_address:cn_internal_port]

4.2 DataNode Maintenance

DataNode maintenance includes adding and removing DataNodes. Common use cases include:

  • Cluster Expansion: Add new DataNodes to increase cluster capacity.
  • Cluster Fault Recovery: If a DataNode's machine fails and it cannot function normally, remove the faulty DataNode and add a new one to the cluster.

Note: During and after DataNode maintenance, ensure that the number of active DataNodes is not fewer than the data replication factor (usually 2) or the schema replication factor (usually 3).

Adding a DataNode

Linux / macOS:

sbin/start-datanode.sh

Windows:

# Before version V2.0.4.x
sbin\start-datanode.bat

# V2.0.4.x and later versions 
sbin\windows\start-datanode.bat

Note: After adding a DataNode, the cluster load will gradually balance across all nodes as new writes arrive and old data expires (if TTL is set).

Removing a DataNode

  1. Connect to the cluster using the CLI and confirm the RPC address and port of the DataNode to be removed:

    show datanodes;

Example output:

IoTDB> show datanodes
+------+-------+----------+-------+-------------+---------------+
|NodeID| Status|RpcAddress|RpcPort|DataRegionNum|SchemaRegionNum|
+------+-------+----------+-------+-------------+---------------+
|     1|Running|   0.0.0.0|   6667|            0|              0|
|     2|Running|   0.0.0.0|   6668|            1|              1|
|     3|Running|   0.0.0.0|   6669|            1|              0|
+------+-------+----------+-------+-------------+---------------+
Total line number = 3
It costs 0.110s
  2. Remove the DataNode using the script:

Linux / macOS:

sbin/remove-datanode.sh [dn_rpc_address:dn_rpc_port]

Windows:

# Before version V2.0.4.x
sbin\remove-datanode.bat [dn_rpc_address:dn_rpc_port]

# V2.0.4.x and later versions
sbin\windows\remove-datanode.bat [dn_rpc_address:dn_rpc_port]

5. Common Questions

  1. Activation Fails Repeatedly
    • Use the ls -al command to verify that the ownership of the installation directory matches the current user.
    • Check the ownership of all files in the ./activation directory to ensure they belong to the current user.
  2. ConfigNode Fails to Start
    • Review the startup logs to check whether any parameters that cannot be modified after the first startup were changed.

    • Check the logs for any other errors. If unresolved, contact technical support for assistance.

    • If the deployment is fresh or the data can be discarded, clean the environment and redeploy using the following steps:

    • Stop all ConfigNode and DataNode processes:

       sbin/stop-standalone.sh
      
    • Check for any remaining processes:

       jps 
       # or 
       ps -ef | grep iotdb
      
    • If processes remain, terminate them manually:

        kill -9 <pid>
      
        #For systems with a single IoTDB instance, you can clean up residual processes with:
        ps -ef | grep iotdb | grep -v grep | tr -s ' ' ' ' | cut -d ' ' -f2 | xargs kill -9
      
    • Delete the data and logs directories:

      cd /data/iotdb 
      rm -rf data logs
      

6. Appendix

6.1 ConfigNode Parameters

| Parameter | Description                                                 | Required |
|-----------|-------------------------------------------------------------|----------|
| -d        | Starts the process in daemon mode (runs in the background). | No       |

6.2 DataNode Parameters

| Parameter | Description                                                                                              | Required |
|-----------|----------------------------------------------------------------------------------------------------------|----------|
| -v        | Displays version information.                                                                            | No       |
| -f        | Runs the script in the foreground without backgrounding it.                                              | No       |
| -d        | Starts the process in daemon mode (runs in the background).                                              | No       |
| -p        | Specifies a file to store the process ID for process management.                                         | No       |
| -c        | Specifies the path to the configuration folder; the script loads configuration files from this location. | No       |
| -g        | Prints detailed garbage collection (GC) information.                                                     | No       |
| -H        | Specifies the path for the Java heap dump file, used during JVM memory overflow.                         | No       |
| -E        | Specifies the file for JVM error logs.                                                                   | No       |
| -D        | Defines system properties in the format key=value.                                                       | No       |
| -X        | Passes -XX options directly to the JVM.                                                                  | No       |
| -h        | Displays the help instructions.                                                                          | No       |