To deploy Griffin with its dependencies in your own environment, follow the instructions below.
You need the following jars from `$SPARK_HOME/lib/`; put them into HDFS:

- datanucleus-api-jdo-3.2.6.jar
- datanucleus-core-3.2.10.jar
- datanucleus-rdbms-3.2.9.jar
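The copy step can be sketched as a small shell loop. This is a dry run that only prints the commands; the HDFS target directory `/griffin/spark-lib` is an assumption, so substitute your own layout and drop the `echo` to actually run them.

```shell
# Dry-run sketch: print the hdfs put command for each datanucleus jar.
# /griffin/spark-lib is a hypothetical HDFS path — use your own.
SPARK_LIB="${SPARK_HOME:-/usr/lib/spark}/lib"
HDFS_DIR="/griffin/spark-lib"
JARS="datanucleus-api-jdo-3.2.6.jar datanucleus-core-3.2.10.jar datanucleus-rdbms-3.2.9.jar"

for jar in $JARS; do
  echo hdfs dfs -put "$SPARK_LIB/$jar" "$HDFS_DIR/"
done
```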
Create database `quartz` in PostgreSQL:
createdb -O <username> quartz
Initialize the quartz tables in PostgreSQL with init_quartz.sql (note that psql's `-p` flag is the port, not the password; psql will prompt for the password):

psql -h <host address> -p <port> -U <username> -f init_quartz.sql quartz
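For non-interactive runs (e.g. in a provisioning script), psql can take the password from the `PGPASSWORD` environment variable or `~/.pgpass`. A dry-run sketch with placeholder values — it only prints the command; fill in your values and run it yourself:

```shell
# Sketch: supply the password via PGPASSWORD instead of a prompt.
# Host, port, and user values below are placeholders.
export PGPASSWORD='<password>'
PSQL_CMD="psql -h <host address> -p 5432 -U <username> -f init_quartz.sql quartz"
echo "$PSQL_CMD"   # printed only; run the command manually once values are filled in
```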
Alternatively, create database `quartz` in MySQL:
mysql -u <username> -e "create database quartz" -p
Initialize the quartz tables in MySQL with init_quartz.sql:
mysql -u <username> -p quartz < init_quartz.sql
You should also adjust the following Griffin configuration files for your environment.
service/src/main/resources/application.properties
```
# jpa
spring.datasource.url = jdbc:postgresql://<your IP>:5432/quartz?autoReconnect=true&useSSL=false
spring.datasource.username = <user name>
spring.datasource.password = <password>
spring.jpa.generate-ddl=true
spring.datasource.driverClassName = org.postgresql.Driver
spring.jpa.show-sql = true

# hive metastore
hive.metastore.uris = thrift://<your IP>:9083
hive.metastore.dbname = <hive database name>    # default is "default"

# external properties directory location, ignore it if not required
external.config.location =

# login strategy, default is "default"
login.strategy = <default or ldap>

# ldap properties, ignore them if ldap is not enabled
ldap.url = ldap://hostname:port
ldap.email = @example.com
ldap.searchBase = DC=org,DC=example
ldap.searchPattern = (sAMAccountName={0})

# hdfs, ignore it if you do not need predicate job
fs.defaultFS = hdfs://<hdfs-default-name>

# elasticsearch
elasticsearch.host = <your IP>
elasticsearch.port = <your elasticsearch rest port>
# authentication properties, uncomment if basic authentication is enabled
# elasticsearch.user = user
# elasticsearch.password = password
```
measure/src/main/resources/env.json
```
"persist": [
  ...
  {
    "type": "http",
    "config": {
      "method": "post",
      "api": "http://<your ES IP>:<ES rest port>/griffin/accuracy"
    }
  }
]
```
Put the modified env.json file into HDFS.
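That upload step can be sketched as below. This is a dry run that only prints the command; `/griffin/env` stands in for your chosen `<griffin env path>`, and the local path assumes you edited env.json in place in the source tree.

```shell
# Dry-run sketch: print the hdfs put command for the modified env.json.
# /griffin/env is a hypothetical HDFS path — use your own <griffin env path>.
ENV_PATH="/griffin/env"
PUT_CMD="hdfs dfs -put measure/src/main/resources/env.json $ENV_PATH/"
echo "$PUT_CMD"   # printed only; drop the echo pattern to execute for real
```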
service/src/main/resources/sparkJob.properties
```
sparkJob.file = hdfs://<griffin measure path>/griffin-measure.jar
sparkJob.args_1 = hdfs://<griffin env path>/env.json

# other dependent jars
sparkJob.jars =

# hive-site.xml location, as configured in spark conf if ignored here
spark.yarn.dist.files =

livy.uri = http://<your IP>:8998/batches
spark.uri = http://<your IP>:8088
```
Build the whole project and deploy (npm must be installed).
mvn clean install
Put the jar file of the measure module into `<griffin measure path>` in HDFS:

```
cp measure/target/measure-<version>-incubating-SNAPSHOT.jar measure/target/griffin-measure.jar
hdfs dfs -put measure/target/griffin-measure.jar <griffin measure path>/
```
After all the environment services have started, start the Griffin server:
java -jar service/target/service.jar
After a few seconds, you can visit the default Griffin UI (by default, Spring Boot serves on port 8080):
http://<your IP>:8080
You can use the UI by following the steps here.
Note: the UI does not expose all features; for advanced features you can use the service API directly.
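As a hypothetical sketch of calling the service API with curl: the endpoint path `/api/v1/measures` below is an assumption, so check the API documentation for your Griffin version for the actual routes. The command is printed rather than executed, since the server address is a placeholder.

```shell
# Sketch: list configured measures via the service REST API.
# The /api/v1/measures route is an assumption — verify against your version's API docs.
BASE_URL="http://<your IP>:8080"
LIST_CMD="curl -s $BASE_URL/api/v1/measures"
echo "$LIST_CMD"   # printed only; run it once <your IP> is filled in
```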