Configuration

Preface

This document explains the DolphinScheduler application configurations for DolphinScheduler 1.3.x versions.

Directory Structure

Currently, all the configuration files are under the [conf] directory. Check the following simplified DolphinScheduler installation directories to get a direct view of the position of the [conf] directory and the configuration files it contains. This document describes DolphinScheduler configurations only and does not cover other topics.

[Note: DolphinScheduler is hereinafter abbreviated as 'DS'.]

├─bin                               DS application commands directory
│  ├─dolphinscheduler-daemon.sh         startup or shutdown DS application 
│  ├─start-all.sh                       startup all DS services with configurations
│  ├─stop-all.sh                        shutdown all DS services with configurations
├─conf                              configurations directory
│  ├─application-api.properties         API-service config properties
│  ├─datasource.properties              datasource config properties
│  ├─zookeeper.properties               ZooKeeper config properties
│  ├─master.properties                  master-service config properties
│  ├─worker.properties                  worker-service config properties
│  ├─quartz.properties                  quartz config properties
│  ├─common.properties                  common-service [storage] config properties
│  ├─alert.properties                   alert-service config properties
│  ├─config                             environment variables config directory
│      ├─install_config.conf                DS environment variables configuration script [install or start DS]
│  ├─env                                load environment variables configs script directory
│      ├─dolphinscheduler_env.sh            load environment variables configs [eg: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
│  ├─org                                mybatis mapper files directory
│  ├─i18n                               i18n configs directory
│  ├─logback-api.xml                    API-service log config
│  ├─logback-master.xml                 master-service log config
│  ├─logback-worker.xml                 worker-service log config
│  ├─logback-alert.xml                  alert-service log config
├─sql                                   .sql files to create or upgrade DS metadata
│  ├─create                             create SQL scripts directory
│  ├─upgrade                            upgrade SQL scripts directory
│  ├─dolphinscheduler_postgre.sql       PostgreSQL database init script
│  ├─dolphinscheduler_mysql.sql         MySQL database init script
│  ├─soft_version                       current DS version-id file
├─script                            DS services deployment, database create or upgrade scripts directory
│  ├─create-dolphinscheduler.sh         DS database init script
│  ├─upgrade-dolphinscheduler.sh        DS database upgrade script
│  ├─monitor-server.sh                  DS monitor-server start script       
│  ├─scp-hosts.sh                       transfer installation files script                                     
│  ├─remove-zk-node.sh                  cleanup ZooKeeper caches script       
├─ui                                front-end web resources directory
├─lib                               DS .jar dependencies directory
├─install.sh                        auto-setup DS services script

Configurations in Detail

| Serial number | Service classification | Config file |
|--|--|--|
| 1 | startup or shutdown DS application | dolphinscheduler-daemon.sh |
| 2 | datasource config properties | datasource.properties |
| 3 | ZooKeeper config properties | zookeeper.properties |
| 4 | common-service [storage] config properties | common.properties |
| 5 | API-service config properties | application-api.properties |
| 6 | master-service config properties | master.properties |
| 7 | worker-service config properties | worker.properties |
| 8 | alert-service config properties | alert.properties |
| 9 | quartz config properties | quartz.properties |
| 10 | DS environment variables configuration script [install/start DS] | install_config.conf |
| 11 | load environment variables configs [eg: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...] | dolphinscheduler_env.sh |
| 12 | services log config files | logback-api.xml, logback-master.xml, logback-worker.xml, logback-alert.xml |

dolphinscheduler-daemon.sh [startup or shutdown DS application]

dolphinscheduler-daemon.sh is responsible for DS startup and shutdown. Essentially, start-all.sh and stop-all.sh start up and shut down the cluster via dolphinscheduler-daemon.sh. Currently, DS only sets basic JVM options; remember to configure further JVM options according to your actual resources.

Default simplified parameters are:

export DOLPHINSCHEDULER_OPTS="
-server 
-Xmx16g 
-Xms1g 
-Xss512k 
-XX:+UseConcMarkSweepGC 
-XX:+CMSParallelRemarkEnabled 
-XX:+UseFastAccessorMethods 
-XX:+UseCMSInitiatingOccupancyOnly 
-XX:CMSInitiatingOccupancyFraction=70
"

“-XX:DisableExplicitGC” is not recommended because it may lead to memory leaks (DS depends on Netty for communication).
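For reference, start-all.sh and stop-all.sh work by invoking the daemon script once per service; the script can also be invoked directly to control a single service. A typical invocation, assuming a standard 1.3.x installation layout and the usual service names, looks like:

```shell
# Start or stop a single DS service from the installation directory.
# Service names in 1.3.x include: master-server, worker-server,
# api-server, alert-server, logger-server.
sh ./bin/dolphinscheduler-daemon.sh start master-server
sh ./bin/dolphinscheduler-daemon.sh stop master-server
```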

datasource.properties [datasource config properties]

DS uses Druid to manage database connections. The default simplified configs are:

| Parameters | Default value | Description |
|--|--|--|
| spring.datasource.driver-class-name | | datasource driver |
| spring.datasource.url | | datasource connection url |
| spring.datasource.username | | datasource username |
| spring.datasource.password | | datasource password |
| spring.datasource.initialSize | 5 | initial connection pool size |
| spring.datasource.minIdle | 5 | minimum connection pool size |
| spring.datasource.maxActive | 5 | maximum connection pool size |
| spring.datasource.maxWait | 60000 | max wait milliseconds |
| spring.datasource.timeBetweenEvictionRunsMillis | 60000 | idle connection check interval |
| spring.datasource.timeBetweenConnectErrorMillis | 60000 | retry interval |
| spring.datasource.minEvictableIdleTimeMillis | 300000 | connections idle longer than minEvictableIdleTimeMillis will be collected during the idle check |
| spring.datasource.validationQuery | SELECT 1 | SQL used to validate connections |
| spring.datasource.validationQueryTimeout | 3 | validate connection timeout [seconds] |
| spring.datasource.testWhileIdle | true | whether the pool validates idle connections when a new connection request comes |
| spring.datasource.testOnBorrow | true | validity check when the program requests a new connection |
| spring.datasource.testOnReturn | false | validity check when the program returns a connection |
| spring.datasource.defaultAutoCommit | true | whether auto commit |
| spring.datasource.keepAlive | true | run the validationQuery SQL to keep a connection alive when it idles longer than minEvictableIdleTimeMillis, so the pool does not close it |
| spring.datasource.poolPreparedStatements | true | open PSCache |
| spring.datasource.maxPoolPreparedStatementPerConnectionSize | 20 | size of PSCache on each connection |
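As a concrete illustration (the host, database name, and credentials below are placeholders, not shipped defaults), a MySQL datasource section could look like:

```properties
# MySQL example; the driver class and url follow standard JDBC conventions
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://192.168.xx.xx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
spring.datasource.username=dolphinscheduler
spring.datasource.password=xxxx
```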

zookeeper.properties [ZooKeeper config properties]

| Parameters | Default value | Description |
|--|--|--|
| zookeeper.quorum | localhost:2181 | ZooKeeper cluster connection info |
| zookeeper.dolphinscheduler.root | /dolphinscheduler | DS root directory in ZooKeeper |
| zookeeper.session.timeout | 60000 | session timeout |
| zookeeper.connection.timeout | 30000 | connection timeout |
| zookeeper.retry.base.sleep | 100 | time to wait between subsequent retries |
| zookeeper.retry.max.sleep | 30000 | maximum time to wait between subsequent retries |
| zookeeper.retry.maxtime | 10 | maximum retry times |
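Since zookeeper.quorum accepts a comma-separated list of host:port pairs, a three-node ensemble (addresses below are placeholders) would be configured as:

```properties
# Comma-separated host:port pairs for a three-node ZooKeeper ensemble
zookeeper.quorum=192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181
zookeeper.dolphinscheduler.root=/dolphinscheduler
```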

common.properties [hadoop, s3, yarn config properties]

Currently, common.properties mainly holds Hadoop and S3-related configurations.

| Parameters | Default value | Description |
|--|--|--|
| data.basedir.path | /tmp/dolphinscheduler | local directory used to store temp files |
| resource.storage.type | NONE | type of resource storage: HDFS, S3, NONE |
| resource.upload.path | /dolphinscheduler | storage path of resource files |
| hadoop.security.authentication.startup.state | false | whether hadoop grants kerberos permission |
| java.security.krb5.conf.path | /opt/krb5.conf | kerberos config directory |
| login.user.keytab.username | hdfs-mycluster@ESZ.COM | kerberos username |
| login.user.keytab.path | /opt/hdfs.headless.keytab | kerberos user keytab |
| kerberos.expire.time | 2 | kerberos expire time, integer, in hours |
| resource.view.suffixs | txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties | file types supported by the resource center |
| hdfs.root.user | hdfs | configure users with corresponding permissions if the storage type is HDFS |
| fs.defaultFS | hdfs://mycluster:8020 | If resource.storage.type=S3, the request url is similar to 's3a://dolphinscheduler'. If resource.storage.type=HDFS and hadoop supports HA, copy core-site.xml and hdfs-site.xml into the 'conf' directory |
| fs.s3a.endpoint | | s3 endpoint url |
| fs.s3a.access.key | | s3 access key |
| fs.s3a.secret.key | | s3 secret key |
| yarn.resourcemanager.ha.rm.ids | | yarn ResourceManager urls. If the ResourceManager supports HA, input the HA IP addresses separated by commas; for standalone mode, leave it empty |
| yarn.application.status.address | http://ds1:8088/ws/v1/cluster/apps/%s | keep the default if the ResourceManager supports HA or is not used; otherwise replace ds1 with the hostname of the standalone ResourceManager |
| dolphinscheduler.env.path | env/dolphinscheduler_env.sh | load environment variables configs [eg: JAVA_HOME, HADOOP_HOME, HIVE_HOME ...] |
| development.state | false | specify whether in development state |
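For example, pointing the resource center at an HDFS HA cluster (the cluster name 'mycluster' is a placeholder) would involve a fragment like the following, in addition to copying core-site.xml and hdfs-site.xml into the 'conf' directory as noted above:

```properties
# Store resource files in HDFS; 'mycluster' is a placeholder HA cluster name
resource.storage.type=HDFS
resource.upload.path=/dolphinscheduler
hdfs.root.user=hdfs
fs.defaultFS=hdfs://mycluster:8020
```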

application-api.properties [API-service config properties]

| Parameters | Default value | Description |
|--|--|--|
| server.port | 12345 | api service communication port |
| server.servlet.session.timeout | 7200 | session timeout |
| server.servlet.context-path | /dolphinscheduler | request path |
| spring.servlet.multipart.max-file-size | 1024MB | maximum file size |
| spring.servlet.multipart.max-request-size | 1024MB | maximum request size |
| server.jetty.max-http-post-size | 5000000 | jetty maximum post size |
| spring.messages.encoding | UTF-8 | message encoding |
| spring.jackson.time-zone | GMT+8 | time zone |
| spring.messages.basename | i18n/messages | i18n config |
| security.authentication.type | PASSWORD | authentication type |

master.properties [master-service config properties]

| Parameters | Default value | Description |
|--|--|--|
| master.listen.port | 5678 | master listen port |
| master.exec.threads | 100 | master-service execute thread number, used to limit the number of process instances running in parallel |
| master.exec.task.num | 20 | number of parallel tasks for each process instance of the master-service |
| master.dispatch.task.num | 3 | number of tasks dispatched per batch by the master-service |
| master.host.selector | LowerWeight | master host selector, used to select a suitable worker to run the task; optional values: random, round-robin, lower weight |
| master.heartbeat.interval | 10 | master heartbeat interval, in seconds |
| master.task.commit.retryTimes | 5 | master commit task retry times |
| master.task.commit.interval | 1000 | master commit task interval, in milliseconds |
| master.max.cpuload.avg | -1 | master max CPU load average; the master can only schedule while the system CPU load average is below this value. The default -1 means the number of CPU cores * 2 |
| master.reserved.memory | 0.3 | master reserved memory; the master can only schedule while system available memory is above this value. The unit is G |

worker.properties [worker-service config properties]

| Parameters | Default value | Description |
|--|--|--|
| worker.listen.port | 1234 | worker-service listen port |
| worker.exec.threads | 100 | worker-service execute thread number, used to limit the number of task instances running in parallel |
| worker.heartbeat.interval | 10 | worker-service heartbeat interval, in seconds |
| worker.max.cpuload.avg | -1 | worker max CPU load average; the worker can only accept dispatched tasks while the system CPU load average is below this value. The default -1 means the number of CPU cores * 2 |
| worker.reserved.memory | 0.3 | worker reserved memory; the worker can only accept dispatched tasks while system available memory is above this value. The unit is G |
| worker.groups | default | worker groups separated by comma, e.g., 'worker.groups=default,test'; the worker joins the corresponding groups according to this config at startup |

alert.properties [alert-service config properties]

| Parameters | Default value | Description |
|--|--|--|
| alert.type | EMAIL | alert type |
| mail.protocol | SMTP | mail server protocol |
| mail.server.host | xxx.xxx.com | mail server host |
| mail.server.port | 25 | mail server port |
| mail.sender | xxx@xxx.com | mail sender email |
| mail.user | xxx@xxx.com | mail sender email name |
| mail.passwd | 111111 | mail sender email password |
| mail.smtp.starttls.enable | true | whether mail opens TLS |
| mail.smtp.ssl.enable | false | whether mail opens SSL |
| mail.smtp.ssl.trust | xxx.xxx.com | mail SSL trust list |
| xls.file.path | /tmp/xls | mail attachment temp storage directory |

The following configure WeCom [optional]:

| Parameters | Default value | Description |
|--|--|--|
| enterprise.wechat.enable | false | whether to enable WeCom |
| enterprise.wechat.corp.id | xxxxxxx | WeCom corp id |
| enterprise.wechat.secret | xxxxxxx | WeCom secret |
| enterprise.wechat.agent.id | xxxxxxx | WeCom agent id |
| enterprise.wechat.users | xxxxxxx | WeCom users |
| enterprise.wechat.token.url | https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid=$corpId&corpsecret=$secret | WeCom token url |
| enterprise.wechat.push.url | https://qyapi.weixin.qq.com/cgi-bin/message/send?access_token=$token | WeCom push url |
| enterprise.wechat.user.send.msg | | send message format |
| enterprise.wechat.team.send.msg | | group message format |
| plugin.dir | /Users/xx/your/path/to/plugin/dir | plugin directory |

quartz.properties [quartz config properties]

This part describes quartz configs; set them according to your practical situation and resources.

| Parameters | Default value |
|--|--|
| org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.StdJDBCDelegate |
| org.quartz.jobStore.driverDelegateClass | org.quartz.impl.jdbcjobstore.PostgreSQLDelegate |
| org.quartz.scheduler.instanceName | DolphinScheduler |
| org.quartz.scheduler.instanceId | AUTO |
| org.quartz.scheduler.makeSchedulerThreadDaemon | true |
| org.quartz.jobStore.useProperties | false |
| org.quartz.threadPool.class | org.quartz.simpl.SimpleThreadPool |
| org.quartz.threadPool.makeThreadsDaemons | true |
| org.quartz.threadPool.threadCount | 25 |
| org.quartz.threadPool.threadPriority | 5 |
| org.quartz.jobStore.class | org.quartz.impl.jdbcjobstore.JobStoreTX |
| org.quartz.jobStore.tablePrefix | QRTZ_ |
| org.quartz.jobStore.isClustered | true |
| org.quartz.jobStore.misfireThreshold | 60000 |
| org.quartz.jobStore.clusterCheckinInterval | 5000 |
| org.quartz.jobStore.acquireTriggersWithinLock | true |
| org.quartz.jobStore.dataSource | myDs |
| org.quartz.dataSource.myDs.connectionProvider.class | org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider |
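Note that org.quartz.jobStore.driverDelegateClass is listed twice above because the delegate must match the metadata database; only one line should be active at a time. For a PostgreSQL metadata database, the relevant fragment would be:

```properties
# Keep exactly one delegate; comment out the one that does not match your database
#org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
```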

install_config.conf [DS environment variables configuration script[install or start DS]]

install_config.conf is a bit complicated and is mainly used in the following two places.

  • DS Cluster Auto Installation.

When executing 'install.sh', the system loads the configs in install_config.conf and auto-configures the files below based on its content: dolphinscheduler-daemon.sh, datasource.properties, zookeeper.properties, common.properties, application-api.properties, master.properties, worker.properties, alert.properties, quartz.properties, etc.

  • Startup and Shutdown DS Cluster.

The system will load masters, workers, alert-server, API-servers and other parameters inside the file to startup or shutdown DS cluster.

File Content


# Note: please escape the character if the file contains special characters such as `.*[]^${}\+?|()@#&`.
# eg: `[` escape to `\[`

# Database type (DS currently only supports PostgreSQL and MySQL)
dbtype="mysql"

# Database url and port
dbhost="192.168.xx.xx:3306"

# Database name
dbname="dolphinscheduler"

# Database username
username="xx"

# Database password
password="xx"

# ZooKeeper url
zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"

# DS installation path, such as '/data1_1T/dolphinscheduler'
installPath="/data1_1T/dolphinscheduler"

# Deployment user
# Note: the deployment user needs 'sudo' privilege and rights to operate HDFS.
# If using HDFS, the root directory must be created by the same user, otherwise permission related issues will be raised.
deployUser="dolphinscheduler"

# The following are alert-service configs
# Mail server host
mailServerHost="smtp.exmail.qq.com"

# Mail server port
mailServerPort="25"

# Mail sender
mailSender="xxxxxxxxxx"

# Mail user
mailUser="xxxxxxxxxx"

# Mail password
mailPassword="xxxxxxxxxx"

# Whether mail supports TLS
starttlsEnable="true"

# Whether mail supports SSL. Note: starttlsEnable and sslEnable cannot both be true.
sslEnable="false"

# Mail server host, same as mailServerHost
sslTrust="smtp.exmail.qq.com"

# Specify which resource upload function to use for resource storage, such as sql files.
# Supported options are HDFS, S3 and NONE; NONE means the function is not used.
resourceStorageType="NONE"

# If S3, write the S3 address, for example: s3a://dolphinscheduler
# Note: for S3, make sure to create the root directory /dolphinscheduler
defaultFS="hdfs://mycluster:8020"

# If parameter 'resourceStorageType' is S3, the following configs are needed:
s3Endpoint="http://192.168.xx.xx:9010"
s3AccessKey="xxxxxxxxxx"
s3SecretKey="xxxxxxxxxx"

# If ResourceManager supports HA, input master and standby node IPs or hostnames, eg: '192.168.xx.xx,192.168.xx.xx';
# if ResourceManager runs in standalone mode, or yarn is not used, set yarnHaIps=""
yarnHaIps="192.168.xx.xx,192.168.xx.xx"

# If ResourceManager runs in standalone mode, set the ResourceManager node ip or hostname; otherwise keep the default.
singleYarnIp="yarnIp1"

# Storage path when using HDFS/S3
resourceUploadPath="/dolphinscheduler"

# HDFS/S3 root user
hdfsRootUser="hdfs"

# The following are Kerberos configs
# Whether Kerberos is enabled
kerberosStartUp="false"

# Kdc krb5 config file path
krb5ConfPath="$installPath/conf/krb5.conf"

# Keytab username
keytabUserName="hdfs-mycluster@ESZ.COM"

# Username keytab path
keytabPath="$installPath/conf/hdfs.headless.keytab"

# API-service port
apiServerPort="12345"

# All hosts that deploy DS
ips="ds1,ds2,ds3,ds4,ds5"

# Ssh port, default 22
sshPort="22"

# Master service hosts
masters="ds1,ds2"

# All hosts that deploy the worker service
# Note: each worker needs a worker group name; the default name is "default"
workers="ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"

# Host that deploys the alert-service
alertServer="ds3"

# Host that deploys the API-service
apiServers="ds1"

dolphinscheduler_env.sh [load environment variables configs]

When using shell to submit tasks, DS loads the environment variables in dolphinscheduler_env.sh on the host. The task types involved include Shell, Python, Spark, Flink, DataX, etc.

export HADOOP_HOME=/opt/soft/hadoop
export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
export SPARK_HOME1=/opt/soft/spark1
export SPARK_HOME2=/opt/soft/spark2
export PYTHON_HOME=/opt/soft/python
export JAVA_HOME=/opt/soft/java
export HIVE_HOME=/opt/soft/hive
export FLINK_HOME=/opt/soft/flink
export DATAX_HOME=/opt/soft/datax/bin/datax.py

export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH

Services logback configs

| Service name | Logback config name |
|--|--|
| API-service logback config | logback-api.xml |
| master-service logback config | logback-master.xml |
| worker-service logback config | logback-worker.xml |
| alert-service logback config | logback-alert.xml |