[GRIFFIN-312] Code Style Standardization

**What changes were proposed in this pull request?**

This PR targets the following:
- fix the various warnings during the build and in the source code,
- perform code formatting as per a standard style,
- fix the Scalastyle integration.

The [code format from the Spark code style](https://github.com/apache/spark/blob/master/dev/.scalafmt.conf) can be automatically imposed on the measure module.

Since Scalastyle targets Scala source code only, it should be part of the measure module only. The current misconfiguration is also suppressing formatting errors.

Scalafmt is used for code formatting.
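
For reference, here is a minimal before/after sketch of the parameter formatting the new `.scalafmt.conf` (alignment disabled, no dangling parentheses, 98-column limit) produces. `SampleParamBefore`/`SampleParamAfter` are hypothetical classes used only for illustration; the actual changes to `DQConfig.scala` in the diff below follow the same pattern.

```scala
// Before: aligned, hanging-indent parameter lists (the style this PR removes)
case class SampleParamBefore(name: String,
                             timestamp: Long,
                             sources: List[String]
                            ) extends Serializable

// After: with align = none and danglingParentheses = false, Scalafmt wraps
// parameters with a uniform continuation indent and keeps the closing
// parenthesis attached to the last parameter.
case class SampleParamAfter(
    name: String,
    timestamp: Long,
    sources: List[String])
    extends Serializable
```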

**Does this PR introduce any user-facing change?**
No

**How was this patch tested?**
Griffin test suite.

Author: chitralverma <chitralverma@gmail.com>

Closes #560 from chitralverma/code-style-standardization.
diff --git a/.scalafmt.conf b/.scalafmt.conf
new file mode 100644
index 0000000..d2196e6
--- /dev/null
+++ b/.scalafmt.conf
@@ -0,0 +1,27 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+align = none
+align.openParenDefnSite = false
+align.openParenCallSite = false
+align.tokens = []
+optIn = {
+  configStyleArguments = false
+}
+danglingParentheses = false
+docstrings = JavaDoc
+maxColumn = 98
diff --git a/.travis.yml b/.travis.yml
index 0fa0042..319902c 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -42,5 +42,5 @@
 script:
   - export MAVEN_SKIP_RC=true
   - export MAVEN_OPTS=" -Xmx1g -XX:ReservedCodeCacheSize=128m -Dorg.slf4j.simpleLogger.defaultLogLevel=warn -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn"
-  - mvn -B clean scalastyle:check test -Dlogging.level.org.springframework=WARN
+  - mvn -B clean test -Dlogging.level.org.springframework=WARN
 
diff --git a/griffin-doc/dev/code-style.md b/griffin-doc/dev/code-style.md
index c21abbb..7968a7e 100644
--- a/griffin-doc/dev/code-style.md
+++ b/griffin-doc/dev/code-style.md
@@ -18,9 +18,13 @@
 -->
 
 # Apache Griffin Development Code Style Config Guide
-Griffin product consists of three main parts, they are developed and built on Angular 2/Java/Scala technique.
-So it's necessary to define a bunch of code style rules, all codes need to follow those rules and
-keep any codes submitted by various committers consistent.
+Apache Griffin consists of three main modules - Measure, Services and UI. These are developed using Scala, Java and Angular 2 respectively.
+
+Quoting the [Databricks Scala Style Guide](https://github.com/databricks/scala-style-guide):
+> Code is written once by its author, but read and modified multiple times by lots of other engineers. As most bugs actually come from future modification of the code, we need to optimize our codebase for long-term, global readability and maintainability. The best way to achieve this is to write simple code.
+
+So it's necessary to define a set of code style rules, which are to be followed by all contributors and committers so as to keep the code base consistent.
+
 
 ## Config Java Code Style
 We suggest developers use automatic tools to check code violation like [CheckStyle](https://github.com/checkstyle/checkstyle).<br>
@@ -309,8 +313,59 @@
         ![idea check-run](../img/devguide/check-run-idea.png)
 
 
-## Config Scala Code Style
-to do
+## Scala Code Style Guide
+
+Since only the Measure module is written in Scala, this guide applies to that module only.
+For Scala code, Apache Griffin follows the official [Scala style guide](https://docs.scala-lang.org/style/) 
+and [Apache Spark Scala guide](https://github.com/databricks/scala-style-guide).
+
+#### Guide Update History
+| Version | Date       | Changes |
+|:-------:|:-----------|:--------|
+| 1       | 2019-12-24 | Initial version: Added Scala Code Style Guide |
+
+#### Configuration
+For code style checks, Apache Griffin uses [Scalastyle](http://www.scalastyle.org/), and formatting is handled 
+by [Scalafmt](https://scalameta.org/scalafmt/).
+
+Configurations for both Scalastyle and Scalafmt are present in the root of Apache Griffin repository.
+ - Scalastyle Config: `scalastyle-config.xml`
+ - Scalafmt Config: `.scalafmt.conf`
+ 
+####  Formatting and Style Check
+ 
+Although automatic formatting and checks are built into the build steps of Apache Griffin, 
+you can manually format the Scala code and check for possible style violations with the following commands.
+
+```
+## assuming current working directory is griffin root
+
+cd measure
+mvn clean verify -DskipTests
+```
+
+####  IntelliJ IDEA Setup
+
+Navigate to _Settings > Editor > Code Style > Scala_ and follow the screenshots below to apply the coding standards
+ in IntelliJ IDEA.
+![scala_code_style_1](../img/devguide/scala-intellij-idea/1.png)
+![scala_code_style_2](../img/devguide/scala-intellij-idea/2.png)
+![scala_code_style_3](../img/devguide/scala-intellij-idea/3.png)
+![scala_code_style_4](../img/devguide/scala-intellij-idea/4.png)
+![scala_code_style_5](../img/devguide/scala-intellij-idea/5.png)
+![scala_code_style_6](../img/devguide/scala-intellij-idea/6.png)
+
+
+#### Important Notes
+1. The [Guide Update History](#guide-update-history) section must be updated each time any section of the 
+Scala Code Style Guide is edited.
+2. Formatting and code style checks must be run each time before submitting or updating a pull request, as 
+violations will cause build failures.
+3. If you’re not sure about the right style for something, try to follow the style of the existing codebase. 
+Look at whether there are other examples in the code that use your feature. 
+
+In case of any queries or dilemmas, feel free to reach out to the other contributors and committers 
+on the [dev@griffin.apache.org](mailto:dev@griffin.apache.org) list.
 
 ## Config Angular 2 Code Style
 to do
diff --git a/griffin-doc/img/devguide/scala-intellij-idea/1.png b/griffin-doc/img/devguide/scala-intellij-idea/1.png
new file mode 100644
index 0000000..741a3ee
--- /dev/null
+++ b/griffin-doc/img/devguide/scala-intellij-idea/1.png
Binary files differ
diff --git a/griffin-doc/img/devguide/scala-intellij-idea/2.png b/griffin-doc/img/devguide/scala-intellij-idea/2.png
new file mode 100644
index 0000000..689ea81
--- /dev/null
+++ b/griffin-doc/img/devguide/scala-intellij-idea/2.png
Binary files differ
diff --git a/griffin-doc/img/devguide/scala-intellij-idea/3.png b/griffin-doc/img/devguide/scala-intellij-idea/3.png
new file mode 100644
index 0000000..fc6cb26
--- /dev/null
+++ b/griffin-doc/img/devguide/scala-intellij-idea/3.png
Binary files differ
diff --git a/griffin-doc/img/devguide/scala-intellij-idea/4.png b/griffin-doc/img/devguide/scala-intellij-idea/4.png
new file mode 100644
index 0000000..29a166d
--- /dev/null
+++ b/griffin-doc/img/devguide/scala-intellij-idea/4.png
Binary files differ
diff --git a/griffin-doc/img/devguide/scala-intellij-idea/5.png b/griffin-doc/img/devguide/scala-intellij-idea/5.png
new file mode 100644
index 0000000..d71d233
--- /dev/null
+++ b/griffin-doc/img/devguide/scala-intellij-idea/5.png
Binary files differ
diff --git a/griffin-doc/img/devguide/scala-intellij-idea/6.png b/griffin-doc/img/devguide/scala-intellij-idea/6.png
new file mode 100644
index 0000000..f364038
--- /dev/null
+++ b/griffin-doc/img/devguide/scala-intellij-idea/6.png
Binary files differ
diff --git a/measure/pom.xml b/measure/pom.xml
index 66ebe4c..ce198bb 100644
--- a/measure/pom.xml
+++ b/measure/pom.xml
@@ -17,7 +17,8 @@
 specific language governing permissions and limitations
 under the License.
 -->
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
     <modelVersion>4.0.0</modelVersion>
 
     <parent>
@@ -47,6 +48,9 @@
         <mockito.version>1.10.19</mockito.version>
         <mysql.java.version>5.1.47</mysql.java.version>
         <cassandra.connector.version>2.4.1</cassandra.connector.version>
+        <scalastyle.version>1.0.0</scalastyle.version>
+        <scalafmt.parameters>--diff --test</scalafmt.parameters>
+        <scalafmt.skip>false</scalafmt.skip>
     </properties>
 
     <dependencies>
@@ -200,6 +204,8 @@
     </dependencies>
 
     <build>
+        <sourceDirectory>src/main/scala</sourceDirectory>
+        <testSourceDirectory>src/test/scala</testSourceDirectory>
         <plugins>
             <plugin>
                 <groupId>net.alchim31.maven</groupId>
@@ -217,6 +223,11 @@
                 </executions>
                 <configuration>
                     <scalaVersion>${scala.version}</scalaVersion>
+                    <args>
+                        <arg>-deprecation</arg>
+                        <arg>-feature</arg>
+                        <arg>-unchecked</arg>
+                    </args>
                 </configuration>
             </plugin>
             <plugin>
@@ -242,7 +253,8 @@
                         </goals>
                         <configuration>
                             <transformers>
-                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
+                                <transformer
+                                        implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                     <mainClass>org.apache.griffin.measure.Application</mainClass>
                                 </transformer>
                             </transformers>
@@ -264,6 +276,51 @@
                     </execution>
                 </executions>
             </plugin>
+            <plugin>
+                <groupId>org.antipathy</groupId>
+                <artifactId>mvn-scalafmt_${scala.binary.version}</artifactId>
+                <version>0.12_1.5.1</version>
+                <configuration>
+                    <parameters>${scalafmt.parameters}</parameters>
+                    <skip>${scalafmt.skip}</skip>
+                    <skipSources>${scalafmt.skip}</skipSources>
+                    <skipTestSources>${scalafmt.skip}</skipTestSources>
+                    <configLocation>${project.parent.basedir}/.scalafmt.conf</configLocation>
+                </configuration>
+                <executions>
+                    <execution>
+                        <phase>validate</phase>
+                        <goals>
+                            <goal>format</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
+            <plugin>
+                <groupId>org.scalastyle</groupId>
+                <artifactId>scalastyle-maven-plugin</artifactId>
+                <version>${scalastyle.version}</version>
+                <configuration>
+                    <verbose>false</verbose>
+                    <failOnViolation>true</failOnViolation>
+                    <includeTestSourceDirectory>false</includeTestSourceDirectory>
+                    <failOnWarning>false</failOnWarning>
+                    <sourceDirectory>${project.build.sourceDirectory}</sourceDirectory>
+                    <testSourceDirectory>${project.build.testSourceDirectory}</testSourceDirectory>
+                    <configLocation>${project.parent.basedir}/scalastyle-config.xml</configLocation>
+                    <outputFile>${basedir}/target/scalastyle-output.xml</outputFile>
+                    <inputEncoding>${project.build.sourceEncoding}</inputEncoding>
+                    <outputEncoding>${project.reporting.outputEncoding}</outputEncoding>
+                </configuration>
+                <executions>
+                    <execution>
+                        <phase>validate</phase>
+                        <goals>
+                            <goal>check</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
         </plugins>
     </build>
 </project>
diff --git a/measure/src/main/scala/org/apache/griffin/measure/Application.scala b/measure/src/main/scala/org/apache/griffin/measure/Application.scala
index deb2781..88bcc16 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/Application.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/Application.scala
@@ -20,7 +20,12 @@
 import scala.reflect.ClassTag
 import scala.util.{Failure, Success, Try}
 
-import org.apache.griffin.measure.configuration.dqdefinition.{DQConfig, EnvConfig, GriffinConfig, Param}
+import org.apache.griffin.measure.configuration.dqdefinition.{
+  DQConfig,
+  EnvConfig,
+  GriffinConfig,
+  Param
+}
 import org.apache.griffin.measure.configuration.dqdefinition.reader.ParamReaderFactory
 import org.apache.griffin.measure.configuration.enums.ProcessType
 import org.apache.griffin.measure.configuration.enums.ProcessType._
@@ -28,10 +33,9 @@
 import org.apache.griffin.measure.launch.batch.BatchDQApp
 import org.apache.griffin.measure.launch.streaming.StreamingDQApp
 
-
 /**
-  * application entrance
-  */
+ * application entrance
+ */
 object Application extends Loggable {
 
   def main(args: Array[String]): Unit = {
@@ -68,11 +72,11 @@
       case BatchProcessType => BatchDQApp(allParam)
       case StreamingProcessType => StreamingDQApp(allParam)
       case _ =>
-        error(s"${procType} is unsupported process type!")
+        error(s"$procType is unsupported process type!")
         sys.exit(-4)
     }
 
-    startup
+    startup()
 
     // dq app init
     dqApp.init match {
@@ -80,7 +84,7 @@
         info("process init success")
       case Failure(ex) =>
         error(s"process init error: ${ex.getMessage}", ex)
-        shutdown
+        shutdown()
         sys.exit(-5)
     }
 
@@ -96,7 +100,7 @@
         if (dqApp.retryable) {
           throw ex
         } else {
-          shutdown
+          shutdown()
           sys.exit(-5)
         }
     }
@@ -107,26 +111,24 @@
         info("process end success")
       case Failure(ex) =>
         error(s"process end error: ${ex.getMessage}", ex)
-        shutdown
+        shutdown()
         sys.exit(-5)
     }
 
-    shutdown
+    shutdown()
 
     if (!success) {
       sys.exit(-5)
     }
   }
 
-  def readParamFile[T <: Param](file: String)(implicit m : ClassTag[T]): Try[T] = {
+  def readParamFile[T <: Param](file: String)(implicit m: ClassTag[T]): Try[T] = {
     val paramReader = ParamReaderFactory.getParamReader(file)
     paramReader.readConfig[T]
   }
 
-  private def startup(): Unit = {
-  }
+  private def startup(): Unit = {}
 
-  private def shutdown(): Unit = {
-  }
+  private def shutdown(): Unit = {}
 
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/Loggable.scala b/measure/src/main/scala/org/apache/griffin/measure/Loggable.scala
index 87d930c..b53f6a3 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/Loggable.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/Loggable.scala
@@ -24,9 +24,9 @@
 
   @transient private lazy val logger = Logger.getLogger(getClass)
 
-  @transient protected lazy val griffinLogger = Logger.getLogger("org.apache.griffin")
+  @transient protected lazy val griffinLogger: Logger = Logger.getLogger("org.apache.griffin")
 
-  def getGriffinLogLevel(): Level = {
+  def getGriffinLogLevel: Level = {
     var logger = griffinLogger
     while (logger != null && logger.getLevel == null) {
       logger = logger.getParent.asInstanceOf[Logger]
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/DQConfig.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/DQConfig.scala
index afa1822..993f432 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/DQConfig.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/DQConfig.scala
@@ -21,144 +21,162 @@
 import com.fasterxml.jackson.annotation.JsonInclude.Include
 import org.apache.commons.lang.StringUtils
 
-import org.apache.griffin.measure.configuration.enums.{DqType, DslType, FlattenType, OutputType, SinkType}
+import org.apache.griffin.measure.configuration.enums.{
+  DqType,
+  DslType,
+  FlattenType,
+  OutputType,
+  SinkType
+}
 import org.apache.griffin.measure.configuration.enums.DqType._
 import org.apache.griffin.measure.configuration.enums.DslType.{DslType, GriffinDsl}
-import org.apache.griffin.measure.configuration.enums.FlattenType.{DefaultFlattenType, FlattenType}
+import org.apache.griffin.measure.configuration.enums.FlattenType.{
+  DefaultFlattenType,
+  FlattenType
+}
 import org.apache.griffin.measure.configuration.enums.OutputType.{OutputType, UnknownOutputType}
 import org.apache.griffin.measure.configuration.enums.SinkType.SinkType
 
 /**
-  * dq param
-  * @param name           name of dq measurement (must)
-  * @param timestamp      default timestamp of measure in batch mode (optional)
-  * @param procType       batch mode or streaming mode (must)
-  * @param dataSources    data sources (must)
-  * @param evaluateRule   dq measurement (must)
-  * @param sinks          sink types (optional, by default will be elasticsearch)
-  */
+ * dq param
+ * @param name           name of dq measurement (must)
+ * @param timestamp      default timestamp of measure in batch mode (optional)
+ * @param procType       batch mode or streaming mode (must)
+ * @param dataSources    data sources (must)
+ * @param evaluateRule   dq measurement (must)
+ * @param sinks          sink types (optional, by default will be elasticsearch)
+ */
 @JsonInclude(Include.NON_NULL)
-case class DQConfig(@JsonProperty("name") private val name: String,
-                    @JsonProperty("timestamp") private val timestamp: Long,
-                    @JsonProperty("process.type") private val procType: String,
-                    @JsonProperty("data.sources") private val dataSources: List[DataSourceParam],
-                    @JsonProperty("evaluate.rule") private val evaluateRule: EvaluateRuleParam,
-                    @JsonProperty("sinks") private val sinks: List[String]
-                  ) extends Param {
+case class DQConfig(
+    @JsonProperty("name") private val name: String,
+    @JsonProperty("timestamp") private val timestamp: Long,
+    @JsonProperty("process.type") private val procType: String,
+    @JsonProperty("data.sources") private val dataSources: List[DataSourceParam],
+    @JsonProperty("evaluate.rule") private val evaluateRule: EvaluateRuleParam,
+    @JsonProperty("sinks") private val sinks: List[String])
+    extends Param {
   def getName: String = name
   def getTimestampOpt: Option[Long] = if (timestamp != 0) Some(timestamp) else None
   def getProcType: String = procType
   def getDataSources: Seq[DataSourceParam] = {
-    dataSources.foldLeft((Nil: Seq[DataSourceParam], Set[String]())) { (ret, ds) =>
-      val (seq, names) = ret
-      if (!names.contains(ds.getName)) {
-        (seq :+ ds, names + ds.getName)
-      } else ret
-    }._1
+    dataSources
+      .foldLeft((Nil: Seq[DataSourceParam], Set[String]())) { (ret, ds) =>
+        val (seq, names) = ret
+        if (!names.contains(ds.getName)) {
+          (seq :+ ds, names + ds.getName)
+        } else ret
+      }
+      ._1
   }
   def getEvaluateRule: EvaluateRuleParam = evaluateRule
-  def getValidSinkTypes: Seq[SinkType] = SinkType.validSinkTypes(if (sinks != null) sinks else Nil)
+  def getValidSinkTypes: Seq[SinkType] =
+    SinkType.validSinkTypes(if (sinks != null) sinks else Nil)
 
   def validate(): Unit = {
     assert(StringUtils.isNotBlank(name), "dq config name should not be blank")
     assert(StringUtils.isNotBlank(procType), "process.type should not be blank")
-    assert((dataSources != null), "data.sources should not be null")
-    assert((evaluateRule != null), "evaluate.rule should not be null")
-    getDataSources.foreach(_.validate)
-    evaluateRule.validate
+    assert(dataSources != null, "data.sources should not be null")
+    assert(evaluateRule != null, "evaluate.rule should not be null")
+    getDataSources.foreach(_.validate())
+    evaluateRule.validate()
   }
 }
 
 /**
-  * data source param
-  * @param name         data source name (must)
-  * @param baseline     data source is baseline or not, false by default (optional)
-  * @param connectors   data connectors (optional)
-  * @param checkpoint   data source checkpoint configuration (must in streaming mode with streaming connectors)
-  */
+ * data source param
+ * @param name         data source name (must)
+ * @param baseline     data source is baseline or not, false by default (optional)
+ * @param connectors   data connectors (optional)
+ * @param checkpoint   data source checkpoint configuration (must in streaming mode with streaming connectors)
+ */
 @JsonInclude(Include.NON_NULL)
-case class DataSourceParam( @JsonProperty("name") private val name: String,
-                            @JsonProperty("connectors") private val connectors: List[DataConnectorParam],
-                            @JsonProperty("baseline") private val baseline: Boolean = false,
-                            @JsonProperty("checkpoint") private val checkpoint: Map[String, Any] = null
-                          ) extends Param {
+case class DataSourceParam(
+    @JsonProperty("name") private val name: String,
+    @JsonProperty("connectors") private val connectors: List[DataConnectorParam],
+    @JsonProperty("baseline") private val baseline: Boolean = false,
+    @JsonProperty("checkpoint") private val checkpoint: Map[String, Any] = null)
+    extends Param {
   def getName: String = name
-  def isBaseline: Boolean = if (!baseline.equals(null)) baseline else false
+  def isBaseline: Boolean = if (Option(baseline).isDefined) baseline else false
   def getConnectors: Seq[DataConnectorParam] = if (connectors != null) connectors else Nil
-  def getCheckpointOpt: Option[Map[String, Any]] = if (checkpoint != null) Some(checkpoint) else None
+  def getCheckpointOpt: Option[Map[String, Any]] = Option(checkpoint)
 
   def validate(): Unit = {
     assert(StringUtils.isNotBlank(name), "data source name should not be empty")
-    getConnectors.foreach(_.validate)
+    getConnectors.foreach(_.validate())
   }
 }
 
 /**
-  * data connector param
-  * @param conType    data connector type, e.g.: hive, avro, kafka (must)
-  * @param version    data connector type version (optional)
-  * @param dataFrameName    data connector dataframe name, for pre-process input usage (optional)
-  * @param config     detail configuration of data connector (must)
-  * @param preProc    pre-process rules after load data (optional)
-  */
+ * data connector param
+ * @param conType    data connector type, e.g.: hive, avro, kafka (must)
+ * @param version    data connector type version (optional)
+ * @param dataFrameName    data connector dataframe name, for pre-process input usage (optional)
+ * @param config     detail configuration of data connector (must)
+ * @param preProc    pre-process rules after load data (optional)
+ */
 @JsonInclude(Include.NON_NULL)
-case class DataConnectorParam( @JsonProperty("type") private val conType: String,
-                               @JsonProperty("version") private val version: String,
-                               @JsonProperty("dataframe.name") private val dataFrameName: String,
-                               @JsonProperty("config") private val config: Map[String, Any],
-                               @JsonProperty("pre.proc") private val preProc: List[RuleParam]
-                             ) extends Param {
+case class DataConnectorParam(
+    @JsonProperty("type") private val conType: String,
+    @JsonProperty("version") private val version: String,
+    @JsonProperty("dataframe.name") private val dataFrameName: String,
+    @JsonProperty("config") private val config: Map[String, Any],
+    @JsonProperty("pre.proc") private val preProc: List[RuleParam])
+    extends Param {
   def getType: String = conType
   def getVersion: String = if (version != null) version else ""
-  def getDataFrameName(defName: String): String = if (dataFrameName != null) dataFrameName else defName
+  def getDataFrameName(defName: String): String =
+    if (dataFrameName != null) dataFrameName else defName
   def getConfig: Map[String, Any] = if (config != null) config else Map[String, Any]()
   def getPreProcRules: Seq[RuleParam] = if (preProc != null) preProc else Nil
 
   def validate(): Unit = {
     assert(StringUtils.isNotBlank(conType), "data connector type should not be empty")
-    getPreProcRules.foreach(_.validate)
+    getPreProcRules.foreach(_.validate())
   }
 }
 
 /**
-  * evaluate rule param
-  * @param rules      rules to define dq measurement (optional)
-  */
+ * evaluate rule param
+ * @param rules      rules to define dq measurement (optional)
+ */
 @JsonInclude(Include.NON_NULL)
-case class EvaluateRuleParam( @JsonProperty("rules") private val rules: List[RuleParam]
-                            ) extends Param {
+case class EvaluateRuleParam(@JsonProperty("rules") private val rules: List[RuleParam])
+    extends Param {
   def getRules: Seq[RuleParam] = if (rules != null) rules else Nil
 
   def validate(): Unit = {
-    getRules.foreach(_.validate)
+    getRules.foreach(_.validate())
   }
 }
 
 /**
-  * rule param
-  * @param dslType    dsl type of this rule (must)
-  * @param dqType     dq type of this rule (must if dsl type is "griffin-dsl")
-  * @param inDfName   name of input dataframe of this rule, by default will be the previous rule output dataframe name
-  * @param outDfName  name of output dataframe of this rule, by default will be generated
-  *                   as data connector dataframe name with index suffix
-  * @param rule       rule to define dq step calculation (must)
-  * @param details    detail config of rule (optional)
-  * @param cache      cache the result for multiple usage (optional, valid for "spark-sql" and "df-ops" mode)
-  * @param outputs    output ways configuration (optional)
-  * @param errorConfs error configuration (valid for 'COMPLETENESS' mode)
-  */
+ * rule param
+ * @param dslType    dsl type of this rule (must)
+ * @param dqType     dq type of this rule (must if dsl type is "griffin-dsl")
+ * @param inDfName   name of input dataframe of this rule, by default will be the previous rule output dataframe name
+ * @param outDfName  name of output dataframe of this rule, by default will be generated
+ *                   as data connector dataframe name with index suffix
+ * @param rule       rule to define dq step calculation (must)
+ * @param details    detail config of rule (optional)
+ * @param cache      cache the result for multiple usage (optional, valid for "spark-sql" and "df-ops" mode)
+ * @param outputs    output ways configuration (optional)
+ * @param errorConfs error configuration (valid for 'COMPLETENESS' mode)
+ */
 @JsonInclude(Include.NON_NULL)
-case class RuleParam(@JsonProperty("dsl.type") private val dslType: String,
-                     @JsonProperty("dq.type") private val dqType: String,
-                     @JsonProperty("in.dataframe.name") private val inDfName: String = null,
-                     @JsonProperty("out.dataframe.name") private val outDfName: String = null,
-                     @JsonProperty("rule") private val rule: String = null,
-                     @JsonProperty("details") private val details: Map[String, Any] = null,
-                     @JsonProperty("cache") private val cache: Boolean = false,
-                     @JsonProperty("out") private val outputs: List[RuleOutputParam] = null,
-                     @JsonProperty("error.confs") private val errorConfs: List[RuleErrorConfParam] = null
-                    ) extends Param {
-  def getDslType: DslType = if (dslType != null) DslType.withNameWithDefault(dslType) else GriffinDsl
+case class RuleParam(
+    @JsonProperty("dsl.type") private val dslType: String,
+    @JsonProperty("dq.type") private val dqType: String,
+    @JsonProperty("in.dataframe.name") private val inDfName: String = null,
+    @JsonProperty("out.dataframe.name") private val outDfName: String = null,
+    @JsonProperty("rule") private val rule: String = null,
+    @JsonProperty("details") private val details: Map[String, Any] = null,
+    @JsonProperty("cache") private val cache: Boolean = false,
+    @JsonProperty("out") private val outputs: List[RuleOutputParam] = null,
+    @JsonProperty("error.confs") private val errorConfs: List[RuleErrorConfParam] = null)
+    extends Param {
+  def getDslType: DslType =
+    if (dslType != null) DslType.withNameWithDefault(dslType) else GriffinDsl
   def getDqType: DqType = if (dqType != null) DqType.withNameWithDefault(dqType) else Unknown
   def getCache: Boolean = if (cache) cache else false
 
@@ -168,7 +186,8 @@
   def getDetails: Map[String, Any] = if (details != null) details else Map[String, Any]()
 
   def getOutputs: Seq[RuleOutputParam] = if (outputs != null) outputs else Nil
-  def getOutputOpt(tp: OutputType): Option[RuleOutputParam] = getOutputs.filter(_.getOutputType == tp).headOption
+  def getOutputOpt(tp: OutputType): Option[RuleOutputParam] =
+    getOutputs.find(_.getOutputType == tp)
 
   def getErrorConfs: Seq[RuleErrorConfParam] = if (errorConfs != null) errorConfs else Nil
 
@@ -190,30 +209,32 @@
   }
 
   def validate(): Unit = {
-    assert(!(getDslType.equals(GriffinDsl) && getDqType.equals(Unknown)),
+    assert(
+      !(getDslType.equals(GriffinDsl) && getDqType.equals(Unknown)),
       "unknown dq type for griffin dsl")
 
-    getOutputs.foreach(_.validate)
-    getErrorConfs.foreach(_.validate)
+    getOutputs.foreach(_.validate())
+    getErrorConfs.foreach(_.validate())
   }
 }
 
 /**
-  * out param of rule
-  * @param outputType     output type (must)
-  * @param name           output name (optional)
-  * @param flatten        flatten type of output metric (optional, available in output metric type)
-  */
+ * out param of rule
+ * @param outputType     output type (must)
+ * @param name           output name (optional)
+ * @param flatten        flatten type of output metric (optional, available in output metric type)
+ */
 @JsonInclude(Include.NON_NULL)
-case class RuleOutputParam( @JsonProperty("type") private val outputType: String,
-                            @JsonProperty("name") private val name: String,
-                            @JsonProperty("flatten") private val flatten: String
-                          ) extends Param {
+case class RuleOutputParam(
+    @JsonProperty("type") private val outputType: String,
+    @JsonProperty("name") private val name: String,
+    @JsonProperty("flatten") private val flatten: String)
+    extends Param {
   def getOutputType: OutputType = {
     if (outputType != null) OutputType.withNameWithDefault(outputType)
     else UnknownOutputType
   }
-  def getNameOpt: Option[String] = if (StringUtils.isNotBlank(name)) Some(name) else None
+  def getNameOpt: Option[String] = Some(name).filter(StringUtils.isNotBlank)
   def getFlatten: FlattenType = {
     if (StringUtils.isNotBlank(flatten)) FlattenType.withNameWithDefault(flatten)
     else DefaultFlattenType
@@ -223,22 +244,25 @@
 }
 
 /**
-  * error configuration parameter
-  * @param columnName the name of the column
-  * @param errorType  the way to match error, regex or enumeration
-  * @param values     error value list
-  */
+ * error configuration parameter
+ * @param columnName the name of the column
+ * @param errorType  the way to match error, regex or enumeration
+ * @param values     error value list
+ */
 @JsonInclude(Include.NON_NULL)
-case class RuleErrorConfParam( @JsonProperty("column.name") private val columnName: String,
-                               @JsonProperty("type") private val errorType: String,
-                               @JsonProperty("values") private val values: List[String]
-                             ) extends Param {
-  def getColumnName: Option[String] = if (StringUtils.isNotBlank(columnName)) Some(columnName) else None
-  def getErrorType: Option[String] = if (StringUtils.isNotBlank(errorType)) Some(errorType) else None
+case class RuleErrorConfParam(
+    @JsonProperty("column.name") private val columnName: String,
+    @JsonProperty("type") private val errorType: String,
+    @JsonProperty("values") private val values: List[String])
+    extends Param {
+  def getColumnName: Option[String] = Some(columnName).filter(StringUtils.isNotBlank)
+  def getErrorType: Option[String] = Some(errorType).filter(StringUtils.isNotBlank)
   def getValues: Seq[String] = if (values != null) values else Nil
 
   def validate(): Unit = {
-    assert("regex".equalsIgnoreCase(getErrorType.get) ||
-      "enumeration".equalsIgnoreCase(getErrorType.get), "error error.conf type")
+    assert(
+      "regex".equalsIgnoreCase(getErrorType.get) ||
+        "enumeration".equalsIgnoreCase(getErrorType.get),
+      "error error.conf type")
   }
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/EnvConfig.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/EnvConfig.scala
index 4c5f937..50468aa 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/EnvConfig.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/EnvConfig.scala
@@ -25,45 +25,48 @@
 import org.apache.griffin.measure.configuration.enums.SinkType.SinkType
 
 /**
-  * environment param
-  * @param sparkParam         config of spark environment (must)
-  * @param sinkParams         config of sink ways (optional)
-  * @param checkpointParams   config of checkpoint locations (required in streaming mode)
-  */
+ * environment param
+ * @param sparkParam         config of spark environment (must)
+ * @param sinkParams         config of sink ways (optional)
+ * @param checkpointParams   config of checkpoint locations (required in streaming mode)
+ */
 @JsonInclude(Include.NON_NULL)
-case class EnvConfig(@JsonProperty("spark") private val sparkParam: SparkParam,
-                     @JsonProperty("sinks") private val sinkParams: List[SinkParam],
-                     @JsonProperty("griffin.checkpoint") private val checkpointParams: List[CheckpointParam]
-                   ) extends Param {
+case class EnvConfig(
+    @JsonProperty("spark") private val sparkParam: SparkParam,
+    @JsonProperty("sinks") private val sinkParams: List[SinkParam],
+    @JsonProperty("griffin.checkpoint") private val checkpointParams: List[CheckpointParam])
+    extends Param {
   def getSparkParam: SparkParam = sparkParam
   def getSinkParams: Seq[SinkParam] = if (sinkParams != null) sinkParams else Nil
-  def getCheckpointParams: Seq[CheckpointParam] = if (checkpointParams != null) checkpointParams else Nil
+  def getCheckpointParams: Seq[CheckpointParam] =
+    if (checkpointParams != null) checkpointParams else Nil
 
   def validate(): Unit = {
-    assert((sparkParam != null), "spark param should not be null")
-    sparkParam.validate
-    getSinkParams.foreach(_.validate)
-    getCheckpointParams.foreach(_.validate)
+    assert(sparkParam != null, "spark param should not be null")
+    sparkParam.validate()
+    getSinkParams.foreach(_.validate())
+    getCheckpointParams.foreach(_.validate())
   }
 }
 
 /**
-  * spark param
-  * @param logLevel         log level of spark application (optional)
-  * @param cpDir            checkpoint directory for spark streaming (required in streaming mode)
-  * @param batchInterval    batch interval for spark streaming (required in streaming mode)
-  * @param processInterval  process interval for streaming dq calculation (required in streaming mode)
-  * @param config           extra config for spark environment (optional)
-  * @param initClear        clear checkpoint directory or not when initial (optional)
-  */
+ * spark param
+ * @param logLevel         log level of spark application (optional)
+ * @param cpDir            checkpoint directory for spark streaming (required in streaming mode)
+ * @param batchInterval    batch interval for spark streaming (required in streaming mode)
+ * @param processInterval  process interval for streaming dq calculation (required in streaming mode)
+ * @param config           extra config for spark environment (optional)
+ * @param initClear        clear checkpoint directory or not when initial (optional)
+ */
 @JsonInclude(Include.NON_NULL)
-case class SparkParam( @JsonProperty("log.level") private val logLevel: String,
-                       @JsonProperty("checkpoint.dir") private val cpDir: String,
-                       @JsonProperty("batch.interval") private val batchInterval: String,
-                       @JsonProperty("process.interval") private val processInterval: String,
-                       @JsonProperty("config") private val config: Map[String, String],
-                       @JsonProperty("init.clear") private val initClear: Boolean
-                     ) extends Param {
+case class SparkParam(
+    @JsonProperty("log.level") private val logLevel: String,
+    @JsonProperty("checkpoint.dir") private val cpDir: String,
+    @JsonProperty("batch.interval") private val batchInterval: String,
+    @JsonProperty("process.interval") private val processInterval: String,
+    @JsonProperty("config") private val config: Map[String, String],
+    @JsonProperty("init.clear") private val initClear: Boolean)
+    extends Param {
   def getLogLevel: String = if (logLevel != null) logLevel else "WARN"
   def getCpDir: String = if (cpDir != null) cpDir else ""
   def getBatchInterval: String = if (batchInterval != null) batchInterval else ""
@@ -79,14 +82,15 @@
 }
 
 /**
-  * sink param
-  * @param sinkType       sink type, e.g.: log, hdfs, http, mongo (must)
-  * @param config         config of sink way (must)
-  */
+ * sink param
+ * @param sinkType       sink type, e.g.: log, hdfs, http, mongo (must)
+ * @param config         config of sink way (must)
+ */
 @JsonInclude(Include.NON_NULL)
-case class SinkParam(@JsonProperty("type") private val sinkType: String,
-                     @JsonProperty("config") private val config: Map[String, Any]
-                    ) extends Param {
+case class SinkParam(
+    @JsonProperty("type") private val sinkType: String,
+    @JsonProperty("config") private val config: Map[String, Any])
+    extends Param {
   def getType: SinkType = SinkType.withNameWithDefault(sinkType)
   def getConfig: Map[String, Any] = if (config != null) config else Map[String, Any]()
 
@@ -96,14 +100,15 @@
 }
 
 /**
-  * checkpoint param
-  * @param cpType       checkpoint location type, e.g.: zookeeper (must)
-  * @param config       config of checkpoint location
-  */
+ * checkpoint param
+ * @param cpType       checkpoint location type, e.g.: zookeeper (must)
+ * @param config       config of checkpoint location
+ */
 @JsonInclude(Include.NON_NULL)
-case class CheckpointParam(@JsonProperty("type") private val cpType: String,
-                           @JsonProperty("config") private val config: Map[String, Any]
-                          ) extends Param {
+case class CheckpointParam(
+    @JsonProperty("type") private val cpType: String,
+    @JsonProperty("config") private val config: Map[String, Any])
+    extends Param {
   def getType: String = cpType
   def getConfig: Map[String, Any] = if (config != null) config else Map[String, Any]()
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/GriffinConfig.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/GriffinConfig.scala
index a29deab..c71ad11 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/GriffinConfig.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/GriffinConfig.scala
@@ -21,21 +21,22 @@
 import com.fasterxml.jackson.annotation.JsonInclude.Include
 
 /**
-  * full set of griffin configuration
-  * @param envConfig   environment configuration (must)
-  * @param dqConfig    dq measurement configuration (must)
-  */
+ * full set of griffin configuration
+ * @param envConfig   environment configuration (must)
+ * @param dqConfig    dq measurement configuration (must)
+ */
 @JsonInclude(Include.NON_NULL)
-case class GriffinConfig(@JsonProperty("env") private val envConfig: EnvConfig,
-                         @JsonProperty("dq") private val dqConfig: DQConfig
-                   ) extends Param {
+case class GriffinConfig(
+    @JsonProperty("env") private val envConfig: EnvConfig,
+    @JsonProperty("dq") private val dqConfig: DQConfig)
+    extends Param {
   def getEnvConfig: EnvConfig = envConfig
   def getDqConfig: DQConfig = dqConfig
 
   def validate(): Unit = {
-    assert((envConfig != null), "environment config should not be null")
-    assert((dqConfig != null), "dq config should not be null")
-    envConfig.validate
-    dqConfig.validate
+    assert(envConfig != null, "environment config should not be null")
+    assert(dqConfig != null, "dq config should not be null")
+    envConfig.validate()
+    dqConfig.validate()
   }
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/Param.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/Param.scala
index 577964a..090577b 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/Param.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/Param.scala
@@ -20,8 +20,8 @@
 trait Param extends Serializable {
 
   /**
-    * validate param internally
-    */
+   * validate param internally
+   */
   def validate(): Unit
 
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamFileReader.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamFileReader.scala
index 5ca1dd1..238b5fa 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamFileReader.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamFileReader.scala
@@ -23,22 +23,20 @@
 import org.apache.griffin.measure.configuration.dqdefinition.Param
 import org.apache.griffin.measure.utils.{HdfsUtil, JsonUtil}
 
-
-
 /**
-  * read params from config file path
+ * read params from config file path
  *
-  * @param filePath:  hdfs path ("hdfs://cluster-name/path")
-  *                   local file path ("file:///path")
-  *                   relative file path ("relative/path")
-  */
+ * @param filePath:  hdfs path ("hdfs://cluster-name/path")
+ *                   local file path ("file:///path")
+ *                   relative file path ("relative/path")
+ */
 case class ParamFileReader(filePath: String) extends ParamReader {
 
-  def readConfig[T <: Param](implicit m : ClassTag[T]): Try[T] = {
+  def readConfig[T <: Param](implicit m: ClassTag[T]): Try[T] = {
     Try {
       val source = HdfsUtil.openFile(filePath)
       val param = JsonUtil.fromJson[T](source)
-      source.close
+      source.close()
       validate(param)
     }
   }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamJsonReader.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamJsonReader.scala
index f6c5488..75475c3 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamJsonReader.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamJsonReader.scala
@@ -23,15 +23,14 @@
 import org.apache.griffin.measure.configuration.dqdefinition.Param
 import org.apache.griffin.measure.utils.JsonUtil
 
-
 /**
-  * read params from json string directly
+ * read params from json string directly
  *
-  * @param jsonString
-  */
+ * @param jsonString
+ */
 case class ParamJsonReader(jsonString: String) extends ParamReader {
 
-  def readConfig[T <: Param](implicit m : ClassTag[T]): Try[T] = {
+  def readConfig[T <: Param](implicit m: ClassTag[T]): Try[T] = {
     Try {
       val param = JsonUtil.fromJson[T](jsonString)
       validate(param)
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamReader.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamReader.scala
index fba39ab..22da9de 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamReader.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamReader.scala
@@ -23,23 +23,22 @@
 import org.apache.griffin.measure.Loggable
 import org.apache.griffin.measure.configuration.dqdefinition.Param
 
-
 trait ParamReader extends Loggable with Serializable {
 
   /**
-    * read config param
- *
-    * @tparam T     param type expected
-    * @return       parsed param
-    */
-  def readConfig[T <: Param](implicit m : ClassTag[T]): Try[T]
+   * read config param
+   *
+   * @tparam T     param type expected
+   * @return       parsed param
+   */
+  def readConfig[T <: Param](implicit m: ClassTag[T]): Try[T]
 
   /**
-    * validate config param
- *
-    * @param param  param to be validated
-    * @return       param itself
-    */
+   * validate config param
+   *
+   * @param param  param to be validated
+   * @return       param itself
+   */
   protected def validate[T <: Param](param: T): T = {
     param.validate()
     param
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamReaderFactory.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamReaderFactory.scala
index 644c7de..5067a9d 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamReaderFactory.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamReaderFactory.scala
@@ -25,10 +25,10 @@
   val file = "file"
 
   /**
-    * parse string content to get param reader
-    * @param pathOrJson
-    * @return
-    */
+   * parse string content to get param reader
+   * @param pathOrJson
+   * @return
+   */
   def getParamReader(pathOrJson: String): ParamReader = {
     val strType = paramStrType(pathOrJson)
     if (json.equals(strType)) ParamJsonReader(pathOrJson)
@@ -40,7 +40,7 @@
       JsonUtil.toAnyMap(str)
       json
     } catch {
-      case e: Throwable => file
+      case _: Throwable => file
     }
   }
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/DqType.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/DqType.scala
index 3176bad..94bc60f 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/DqType.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/DqType.scala
@@ -20,37 +20,36 @@
 import org.apache.griffin.measure.configuration.enums
 
 /**
-  * effective when dsl type is "griffin-dsl",
-  * indicates the dq type of griffin pre-defined measurements
-  * <li>{@link #Accuracy} - The match percentage of items between source and target
-  *                         count(source items matched with the ones from target) / count(source)
-  *                         e.g.: source [1, 2, 3, 4, 5], target: [1, 2, 3, 4]
-  *                         metric will be: { "total": 5, "miss": 1, "matched": 4 } accuracy is 80%.</li>
-  * <li>{@link #Profiling} - The statistic data of data source
-  *                          e.g.: max, min, average, group by count, ...</li>
-  * <li>{@link #Uniqueness} - The uniqueness of data source comparing with itself
-  *                           count(unique items in source) / count(source)
-  *                           e.g.: [1, 2, 3, 3] -> { "unique": 2, "total": 4, "dup-arr": [ "dup": 1, "num": 1 ] }
-  *                           uniqueness indicates the items without any replica of data</li>
-  * <li>{@link #Distinctness} - The distinctness of data source comparing with itself
-  *                             count(distinct items in source) / count(source)
-  *                             e.g.: [1, 2, 3, 3] -> { "dist": 3, "total": 4, "dup-arr": [ "dup": 1, "num": 1 ] }
-  *                             distinctness indicates the valid information of data
-  *                             comparing with uniqueness, distinctness is more meaningful</li>
-  * <li>{@link #Timeliness} - The latency of data source with timestamp information
-  *                           e.g.: (receive_time - send_time)
-  *                           timeliness can get the statistic metric of latency, like average, max, min,
-  *                            percentile-value,
-  *                           even more, it can record the items with latency above threshold you configured</li>
-  * <li>{@link #Completeness} - The completeness of data source
-  *                             the columns you measure is incomplete if it is null</li>
-  */
+ * effective when dsl type is "griffin-dsl",
+ * indicates the dq type of griffin pre-defined measurements
+ * <li> - The match percentage of items between source and target
+ *                         count(source items matched with the ones from target) / count(source)
+ *                         e.g.: source [1, 2, 3, 4, 5], target: [1, 2, 3, 4]
+ *                         metric will be: { "total": 5, "miss": 1, "matched": 4 } accuracy is 80%.</li>
+ * <li> - The statistic data of data source
+ *                          e.g.: max, min, average, group by count, ...</li>
+ * <li> - The uniqueness of data source comparing with itself
+ *                           count(unique items in source) / count(source)
+ *                           e.g.: [1, 2, 3, 3] -> { "unique": 2, "total": 4, "dup-arr": [ "dup": 1, "num": 1 ] }
+ *                           uniqueness indicates the items without any replica of data</li>
+ * <li> - The distinctness of data source comparing with itself
+ *                             count(distinct items in source) / count(source)
+ *                             e.g.: [1, 2, 3, 3] -> { "dist": 3, "total": 4, "dup-arr": [ "dup": 1, "num": 1 ] }
+ *                             distinctness indicates the valid information of data
+ *                             comparing with uniqueness, distinctness is more meaningful</li>
+ * <li> - The latency of data source with timestamp information
+ *                           e.g.: (receive_time - send_time)
+ *                           timeliness can get the statistic metric of latency, like average, max, min,
+ *                            percentile-value,
+ *                           even more, it can record the items with latency above threshold you configured</li>
+ * <li> - The completeness of data source
+ *                             the columns you measure is incomplete if it is null</li>
+ */
 object DqType extends GriffinEnum {
 
   type DqType = Value
 
-  val Accuracy, Profiling, Uniqueness, Duplicate, Distinct, Timeliness,
-  Completeness = Value
+  val Accuracy, Profiling, Uniqueness, Duplicate, Distinct, Timeliness, Completeness = Value
 
   override def withNameWithDefault(name: String): enums.DqType.Value = {
     val dqType = super.withNameWithDefault(name)
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/DslType.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/DslType.scala
index 1b66f35..58981a7 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/DslType.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/DslType.scala
@@ -20,21 +20,21 @@
 import org.apache.griffin.measure.configuration.enums
 
 /**
-  * dsl type indicates the language type of rule param
-  * <li>{@link #SparkSql} - spark-sql: rule defined in "SPARK-SQL" directly</li>
-  * <li>{@link #DfOps} - df-ops|df-opr|: data frame operations rule, support some pre-defined data frame ops()</li>
-  * <li>{@link #GriffinDsl} - griffin dsl rule, to define dq measurements easier</li>
-  */
+ * dsl type indicates the language type of rule param
+ * <li> - spark-sql: rule defined in "SPARK-SQL" directly</li>
+ * <li> - df-ops|df-opr|: data frame operations rule, support some pre-defined data frame ops()</li>
+ * <li> - griffin dsl rule, to define dq measurements easier</li>
+ */
 object DslType extends GriffinEnum {
   type DslType = Value
 
   val SparkSql, DfOps, DfOpr, DfOperations, GriffinDsl, DataFrameOpsType = Value
 
   /**
-    *
-    * @param name Dsltype from config file
-    * @return Enum value corresponding to string
-    */
+   *
+   * @param name Dsltype from config file
+   * @return Enum value corresponding to string
+   */
   def withNameWithDslType(name: String): Value =
     values
       .find(_.toString.toLowerCase == name.replace("-", "").toLowerCase())
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/FlattenType.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/FlattenType.scala
index 2586268..78e8889 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/FlattenType.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/FlattenType.scala
@@ -18,31 +18,31 @@
 package org.apache.griffin.measure.configuration.enums
 
 /**
-  * the strategy to flatten metric
-  *  <li>{@link #DefaultFlattenType} -  default flatten strategy
-  *                                     metrics contains 1 row -> flatten metric json map
-  *                                     metrics contains n > 1 rows -> flatten metric json array
-  *                                     n = 0: { }
-  *                                     n = 1: { "col1": "value1", "col2": "value2", ... }
-  *                                     n > 1: { "arr-name": [ { "col1": "value1", "col2": "value2", ... }, ... ] }
-  *                                     all rows
-  *  </li>
-  *  <li>{@link #EntriesFlattenType} - metrics contains n rows -> flatten metric json map
-  *                                    n = 0: { }
-  *                                    n >= 1: { "col1": "value1", "col2": "value2", ... }
-  *                                    the first row only
-  *  </li>
-  *  <li>{@link #ArrayFlattenType} -   metrics contains n rows -> flatten metric json array
-  *                                    n = 0: { "arr-name": [ ] }
-  *                                    n >= 1: { "arr-name": [ { "col1": "value1", "col2": "value2", ... }, ... ] }
-  *                                    all rows
-  *  </li>
-  *  <li>{@link #MapFlattenType} - metrics contains n rows -> flatten metric json wrapped map
-  *                                n = 0: { "map-name": { } }
-  *                                n >= 1: { "map-name": { "col1": "value1", "col2": "value2", ... } }
-  *                                the first row only
-  *  </li>
-  */
+ * the strategy to flatten metric
+ *  <li> -  default flatten strategy
+ *                                     metrics contains 1 row -> flatten metric json map
+ *                                     metrics contains n > 1 rows -> flatten metric json array
+ *                                     n = 0: { }
+ *                                     n = 1: { "col1": "value1", "col2": "value2", ... }
+ *                                     n > 1: { "arr-name": [ { "col1": "value1", "col2": "value2", ... }, ... ] }
+ *                                     all rows
+ *  </li>
+ *  <li> - metrics contains n rows -> flatten metric json map
+ *                                    n = 0: { }
+ *                                    n >= 1: { "col1": "value1", "col2": "value2", ... }
+ *                                    the first row only
+ *  </li>
+ *  <li>{@link #ArrayFlattenType} -   metrics contains n rows -> flatten metric json array
+ *                                    n = 0: { "arr-name": [ ] }
+ *                                    n >= 1: { "arr-name": [ { "col1": "value1", "col2": "value2", ... }, ... ] }
+ *                                    all rows
+ *  </li>
+ *  <li>{@link #MapFlattenType} - metrics contains n rows -> flatten metric json wrapped map
+ *                                n = 0: { "map-name": { } }
+ *                                n >= 1: { "map-name": { "col1": "value1", "col2": "value2", ... } }
+ *                                the first row only
+ *  </li>
+ */
 object FlattenType extends GriffinEnum {
   type FlattenType = Value
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/GriffinEnum.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/GriffinEnum.scala
index b73742d..1c488fc 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/GriffinEnum.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/GriffinEnum.scala
@@ -20,13 +20,13 @@
 trait GriffinEnum extends Enumeration {
   type GriffinEnum = Value
 
-  val Unknown = Value
+  val Unknown: Value = Value
 
   /**
-    *
-    * @param name Constant value in String
-    * @return Enum constant value
-    */
+   *
+   * @param name Constant value in String
+   * @return Enum constant value
+   */
   def withNameWithDefault(name: String): Value =
     values
       .find(_.toString.toLowerCase == name.replace("-", "").toLowerCase())
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/OutputType.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/OutputType.scala
index dd0b2d1..813066b 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/OutputType.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/OutputType.scala
@@ -18,17 +18,16 @@
 package org.apache.griffin.measure.configuration.enums
 
 /**
-  * the strategy to output metric
-  *  <li>{@link #MetricOutputType} - output the rule step result as metric</li>
-  *  <li>{@link #RecordOutputType} - output the rule step result as records</li>
-  *  <li>{@link #DscUpdateOutputType} - output the rule step result to update data source cache</li>
-  *  <li>{@link #UnknownOutputType} - will not output the result </li>
-  */
+ * the strategy to output metric
+ *  <li>{@link #MetricOutputType} - output the rule step result as metric</li>
+ *  <li>{@link #RecordOutputType} - output the rule step result as records</li>
+ *  <li>{@link #DscUpdateOutputType} - output the rule step result to update data source cache</li>
+ *  <li>{@link #UnknownOutputType} - will not output the result</li>
+ */
 object OutputType extends GriffinEnum {
   type OutputType = Value
 
-  val MetricOutputType, RecordOutputType, DscUpdateOutputType,
-  UnknownOutputType = Value
+  val MetricOutputType, RecordOutputType, DscUpdateOutputType, UnknownOutputType = Value
 
   val Metric, Record, Records, DscUpdate = Value
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/ProcessType.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/ProcessType.scala
index 90e4ae9..b65e533 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/ProcessType.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/ProcessType.scala
@@ -17,14 +17,16 @@
 
 package org.apache.griffin.measure.configuration.enums
 
+import org.apache.griffin.measure.configuration.enums
+
 /**
-  * process type enum
-  *  <li>{@link #BatchProcessType} - Process in batch mode </li>
-  *  <li>{@link #StreamingProcessType} - Process in streaming mode</li>
-  */
+ * process type enum
+ *  <li>{@link #BatchProcessType} - Process in batch mode</li>
+ *  <li>{@link #StreamingProcessType} - Process in streaming mode</li>
+ */
 object ProcessType extends GriffinEnum {
   type ProcessType = Value
 
-  val BatchProcessType = Value("Batch")
-  val StreamingProcessType = Value("Streaming")
+  val BatchProcessType: enums.ProcessType.Value = Value("Batch")
+  val StreamingProcessType: enums.ProcessType.Value = Value("Streaming")
 }
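
Since the two constants above carry custom names ("Batch", "Streaming"), the shared GriffinEnum lookup matches the plain config strings directly; a small sketch for illustration:

```scala
import org.apache.griffin.measure.configuration.enums.ProcessType

ProcessType.withNameWithDefault("batch")     // BatchProcessType (toString is "Batch")
ProcessType.withNameWithDefault("streaming") // StreamingProcessType
```
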
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/SinkType.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/SinkType.scala
index 1476ae1..75d91d8 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/SinkType.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/SinkType.scala
@@ -20,15 +20,15 @@
 import org.apache.griffin.measure.configuration.enums
 
 /**
-  * Supported Sink types
-  *  <li>{@link #Console #Log} -  console sink, will sink metric in console (alias log)</li>
-  *  <li>{@link #Hdfs} - hdfs sink, will sink metric and record in hdfs</li>
-  *  <li>{@link #Es #Elasticsearch #Http} - elasticsearch sink, will sink metric
-  *  in elasticsearch (alias Es and Http)</li>
-  *  <li>{@link #Mongo #MongoDB} - mongo sink, will sink metric in mongo db (alias MongoDb)</li>
-  *  <li>{@link #Custom} - custom sink (needs using extra jar-file-extension)</li>
-  *  <li>{@link #Unknown} - </li>
-  */
+ * Supported Sink types
+ *  <li>{@link #Console #Log} -  console sink, will sink metric in console (alias log)</li>
+ *  <li>{@link #Hdfs} - hdfs sink, will sink metric and record in hdfs</li>
+ *  <li>{@link #Es #Elasticsearch #Http} - elasticsearch sink, will sink metric
+ *  in elasticsearch (alias Es and Http)</li>
+ *  <li>{@link #Mongo #MongoDB} - mongo sink, will sink metric in mongo db (alias MongoDb)</li>
+ *  <li>{@link #Custom} - custom sink (needs using extra jar-file-extension)</li>
+ *  <li>{@link #Unknown} - unrecognized sink type</li>
+ */
 object SinkType extends GriffinEnum {
   type SinkType = Value
 
@@ -40,7 +40,7 @@
       .map(s => SinkType.withNameWithDefault(s))
       .filter(_ != SinkType.Unknown)
       .distinct
-    if (seq.size > 0) seq else Seq(SinkType.ElasticSearch)
+    if (seq.nonEmpty) seq else Seq(SinkType.ElasticSearch)
   }
 
   override def withNameWithDefault(name: String): enums.SinkType.Value = {
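
A rough sketch of what the nonEmpty fallback above implies; the enclosing method's signature and the alias resolution in the overridden withNameWithDefault fall outside this hunk, so the method name and the exact constants below are assumptions based on the Javadoc aliases:

```scala
import org.apache.griffin.measure.configuration.enums.SinkType

// Assumed enclosing method name (its signature is outside this hunk): validSinkTypes.
// Duplicates and unknown entries collapse away; an empty result falls back to ElasticSearch:
SinkType.validSinkTypes(Seq("console", "CONSOLE", "hdfs")) // e.g. Seq(Console, Hdfs)
SinkType.validSinkTypes(Seq("not-a-sink"))                 // Seq(ElasticSearch)
```
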
diff --git a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/WriteMode.scala b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/WriteMode.scala
index 7057588..8f0795d 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/WriteMode.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/configuration/enums/WriteMode.scala
@@ -18,8 +18,8 @@
 package org.apache.griffin.measure.configuration.enums
 
 /**
-  * write mode when write metrics and records
-  */
+ * write mode when writing metrics and records
+ */
 sealed trait WriteMode {}
 
 object WriteMode {
@@ -32,11 +32,11 @@
 }
 
 /**
-  * simple mode: write metrics and records directly
-  */
- case object SimpleMode extends WriteMode {}
+ * simple mode: write metrics and records directly
+ */
+case object SimpleMode extends WriteMode {}
 
 /**
-  * timestamp mode: write metrics and records with timestamp information
-  */
- case object TimestampMode extends WriteMode {}
+ * timestamp mode: write metrics and records with timestamp information
+ */
+case object TimestampMode extends WriteMode {}
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/ContextId.scala b/measure/src/main/scala/org/apache/griffin/measure/context/ContextId.scala
index a529d15..c6a74d7 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/ContextId.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/ContextId.scala
@@ -18,10 +18,10 @@
 package org.apache.griffin.measure.context
 
 /**
-  * context id, unique by different timestamp and tag
-  */
+ * context id, unique per timestamp and tag
+ */
 case class ContextId(timestamp: Long, tag: String = "") extends Serializable {
   def id: String = {
-    if (tag.nonEmpty) s"${tag}_${timestamp}" else s"${timestamp}"
+    if (tag.nonEmpty) s"${tag}_$timestamp" else s"$timestamp"
   }
 }
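
For clarity, the id format above yields, with an arbitrary example timestamp:

```scala
import org.apache.griffin.measure.context.ContextId

ContextId(1564766400000L).id        // "1564766400000"
ContextId(1564766400000L, "src").id // "src_1564766400000"
```
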
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/DQContext.scala b/measure/src/main/scala/org/apache/griffin/measure/context/DQContext.scala
index bea7f7d..805a0c5 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/DQContext.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/DQContext.scala
@@ -17,7 +17,7 @@
 
 package org.apache.griffin.measure.context
 
-import org.apache.spark.sql.{Encoders, SparkSession}
+import org.apache.spark.sql.{Encoder, Encoders, SparkSession}
 
 import org.apache.griffin.measure.configuration.dqdefinition._
 import org.apache.griffin.measure.configuration.enums.ProcessType._
@@ -26,16 +26,16 @@
 import org.apache.griffin.measure.sink.{Sink, SinkFactory}
 
 /**
-  * dq context: the context of each calculation
-  * unique context id in each calculation
-  * access the same spark session this app created
-  */
-case class DQContext(contextId: ContextId,
-                     name: String,
-                     dataSources: Seq[DataSource],
-                     sinkParams: Seq[SinkParam],
-                     procType: ProcessType
-                    )(@transient implicit val sparkSession: SparkSession) {
+ * dq context: the context of each calculation
+ * holds a unique context id per calculation
+ * and accesses the spark session created by this app
+ */
+case class DQContext(
+    contextId: ContextId,
+    name: String,
+    dataSources: Seq[DataSource],
+    sinkParams: Seq[SinkParam],
+    procType: ProcessType)(@transient implicit val sparkSession: SparkSession) {
 
   val compileTableRegister: CompileTableRegister = CompileTableRegister()
   val runTimeTableRegister: RunTimeTableRegister = RunTimeTableRegister(sparkSession)
@@ -43,13 +43,14 @@
   val dataFrameCache: DataFrameCache = DataFrameCache()
 
   val metricWrapper: MetricWrapper = MetricWrapper(name, sparkSession.sparkContext.applicationId)
-  val writeMode = WriteMode.defaultMode(procType)
+  val writeMode: WriteMode = WriteMode.defaultMode(procType)
 
   val dataSourceNames: Seq[String] = {
     // sort data source names, put baseline data source name to the head
-    val (blOpt, others) = dataSources.foldLeft((None: Option[String], Nil: Seq[String])) { (ret, ds) =>
-      val (opt, seq) = ret
-      if (opt.isEmpty && ds.isBaseline) (Some(ds.name), seq) else (opt, seq :+ ds.name)
+    val (blOpt, others) = dataSources.foldLeft((None: Option[String], Nil: Seq[String])) {
+      (ret, ds) =>
+        val (opt, seq) = ret
+        if (opt.isEmpty && ds.isBaseline) (Some(ds.name), seq) else (opt, seq :+ ds.name)
     }
     blOpt match {
       case Some(bl) => bl +: others
@@ -62,10 +63,10 @@
     if (dataSourceNames.size > index) dataSourceNames(index) else ""
   }
 
-  implicit val encoder = Encoders.STRING
+  implicit val encoder: Encoder[String] = Encoders.STRING
   val functionNames: Seq[String] = sparkSession.catalog.listFunctions.map(_.name).collect.toSeq
 
-  val dataSourceTimeRanges = loadDataSources()
+  val dataSourceTimeRanges: Map[String, TimeRange] = loadDataSources()
 
   def loadDataSources(): Map[String, TimeRange] = {
     dataSources.map { ds =>
@@ -73,22 +74,22 @@
     }.toMap
   }
 
-  printTimeRanges
+  printTimeRanges()
 
   private val sinkFactory = SinkFactory(sinkParams, name)
   private val defaultSink: Sink = createSink(contextId.timestamp)
 
   def getSink(timestamp: Long): Sink = {
-    if (timestamp == contextId.timestamp) getSink()
+    if (timestamp == contextId.timestamp) getSink
     else createSink(timestamp)
   }
 
-  def getSink(): Sink = defaultSink
+  def getSink: Sink = defaultSink
 
   private def createSink(t: Long): Sink = {
     procType match {
-      case BatchProcessType => sinkFactory.getSinks(t, true)
-      case StreamingProcessType => sinkFactory.getSinks(t, false)
+      case BatchProcessType => sinkFactory.getSinks(t, block = true)
+      case StreamingProcessType => sinkFactory.getSinks(t, block = false)
     }
   }
 
@@ -106,11 +107,13 @@
 
   private def printTimeRanges(): Unit = {
     if (dataSourceTimeRanges.nonEmpty) {
-      val timeRangesStr = dataSourceTimeRanges.map { pair =>
-        val (name, timeRange) = pair
-        s"${name} -> (${timeRange.begin}, ${timeRange.end}]"
-      }.mkString(", ")
-      println(s"data source timeRanges: ${timeRangesStr}")
+      val timeRangesStr = dataSourceTimeRanges
+        .map { pair =>
+          val (name, timeRange) = pair
+          s"$name -> (${timeRange.begin}, ${timeRange.end}]"
+        }
+        .mkString(", ")
+      println(s"data source timeRanges: $timeRangesStr")
     }
   }
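
The reformatted fold in dataSourceNames is the densest change in this file; a stand-alone sketch of the same logic, using plain (name, isBaseline) tuples in place of DataSource, makes the baseline-first ordering explicit:

```scala
val dataSources = Seq(("src", false), ("tgt", true), ("ref", false))

val (blOpt, others) = dataSources.foldLeft((None: Option[String], Nil: Seq[String])) {
  (ret, ds) =>
    val (opt, seq) = ret
    if (opt.isEmpty && ds._2) (Some(ds._1), seq) else (opt, seq :+ ds._1)
}

val ordered = blOpt match {
  case Some(bl) => bl +: others // Seq("tgt", "src", "ref"): the first baseline source moves to the head
  case _ => others
}
```
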
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/DataFrameCache.scala b/measure/src/main/scala/org/apache/griffin/measure/context/DataFrameCache.scala
index 09e00e6..4bcae38 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/DataFrameCache.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/DataFrameCache.scala
@@ -17,19 +17,19 @@
 
 package org.apache.griffin.measure.context
 
-import scala.collection.mutable.{Map => MutableMap, MutableList}
+import scala.collection.mutable
 
 import org.apache.spark.sql.DataFrame
 
 import org.apache.griffin.measure.Loggable
 
 /**
-  * cache and unpersist dataframes
-  */
+ * cache and unpersist dataframes
+ */
 case class DataFrameCache() extends Loggable {
 
-  val dataFrames: MutableMap[String, DataFrame] = MutableMap()
-  val trashDataFrames: MutableList[DataFrame] = MutableList()
+  val dataFrames: mutable.Map[String, DataFrame] = mutable.Map()
+  val trashDataFrames: mutable.MutableList[DataFrame] = mutable.MutableList()
 
   private def trashDataFrame(df: DataFrame): Unit = {
     trashDataFrames += df
@@ -39,7 +39,7 @@
   }
 
   def cacheDataFrame(name: String, df: DataFrame): Unit = {
-    info(s"try to cache data frame ${name}")
+    info(s"try to cache data frame $name")
     dataFrames.get(name) match {
       case Some(odf) =>
         trashDataFrame(odf)
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/MetricWrapper.scala b/measure/src/main/scala/org/apache/griffin/measure/context/MetricWrapper.scala
index 7e266af..c503d6a 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/MetricWrapper.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/MetricWrapper.scala
@@ -20,8 +20,8 @@
 import scala.collection.mutable.{Map => MutableMap}
 
 /**
-  * wrap metrics into one, each calculation produces one metric map
-  */
+ * wrap metrics into one, each calculation produces one metric map
+ */
 case class MetricWrapper(name: String, applicationId: String) extends Serializable {
 
   val _Name = "name"
@@ -42,12 +42,13 @@
   def flush: Map[Long, Map[String, Any]] = {
     metrics.toMap.map { pair =>
       val (timestamp, value) = pair
-      (timestamp, Map[String, Any](
-        (_Name -> name),
-        (_Timestamp -> timestamp),
-        (_Value -> value),
-        (_Metadata -> Map("applicationId" -> applicationId))
-      ))
+      (
+        timestamp,
+        Map[String, Any](
+          _Name -> name,
+          _Timestamp -> timestamp,
+          _Value -> value,
+          _Metadata -> Map("applicationId" -> applicationId)))
     }
   }
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/TableRegister.scala b/measure/src/main/scala/org/apache/griffin/measure/context/TableRegister.scala
index 0190d09..f272574 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/TableRegister.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/TableRegister.scala
@@ -23,10 +23,9 @@
 
 import org.apache.griffin.measure.Loggable
 
-
 /**
-  * register table name
-  */
+ * register table name
+ */
 trait TableRegister extends Loggable with Serializable {
 
   protected val tables: MutableSet[String] = MutableSet()
@@ -46,20 +45,20 @@
     tables.clear
   }
 
-  def getTables(): Set[String] = {
+  def getTables: Set[String] = {
     tables.toSet
   }
 
 }
 
 /**
-  * register table name when building dq job
-  */
+ * register table name when building dq job
+ */
 case class CompileTableRegister() extends TableRegister {}
 
 /**
-  * register table name and create temp view during calculation
-  */
+ * register table name and create temp view during calculation
+ */
 case class RunTimeTableRegister(@transient sparkSession: SparkSession) extends TableRegister {
 
   def registerTable(name: String, df: DataFrame): Unit = {
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/TimeRange.scala b/measure/src/main/scala/org/apache/griffin/measure/context/TimeRange.scala
index 55e637b..6662468 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/TimeRange.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/TimeRange.scala
@@ -33,8 +33,9 @@
 }
 
 object TimeRange {
-  val emptyTimeRange = TimeRange(0, 0, Set[Long]())
-  def apply(range: (Long, Long), tmsts: Set[Long]): TimeRange = TimeRange(range._1, range._2, tmsts)
+  val emptyTimeRange: TimeRange = TimeRange(0, 0, Set[Long]())
+  def apply(range: (Long, Long), tmsts: Set[Long]): TimeRange =
+    TimeRange(range._1, range._2, tmsts)
   def apply(ts: Long, tmsts: Set[Long]): TimeRange = TimeRange(ts, ts, tmsts)
   def apply(ts: Long): TimeRange = TimeRange(ts, ts, Set[Long](ts))
   def apply(tmsts: Set[Long]): TimeRange = {
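
For reference, the overloads above (including the reformatted two-line apply) behave as, for example:

```scala
import org.apache.griffin.measure.context.TimeRange

TimeRange((1L, 5L), Set(1L, 3L, 5L)) // TimeRange(1, 5, Set(1, 3, 5))
TimeRange(7L)                        // TimeRange(7, 7, Set(7))
```
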
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/lock/CheckpointLock.scala b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/lock/CheckpointLock.scala
index 095699b..f941108 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/lock/CheckpointLock.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/lock/CheckpointLock.scala
@@ -22,8 +22,8 @@
 import org.apache.griffin.measure.Loggable
 
 /**
-  * lock for checkpoint
-  */
+ * lock for checkpoint
+ */
 trait CheckpointLock extends Loggable with Serializable {
 
   def lock(outtime: Long, unit: TimeUnit): Boolean
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/lock/CheckpointLockInZK.scala b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/lock/CheckpointLockInZK.scala
index dedf028..e5c3df5 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/lock/CheckpointLockInZK.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/lock/CheckpointLockInZK.scala
@@ -40,7 +40,7 @@
 
   def unlock(): Unit = {
     try {
-      if (mutex.isAcquiredInThisProcess) mutex.release
+      if (mutex.isAcquiredInThisProcess) mutex.release()
     } catch {
       case e: Throwable =>
         error(s"unlock error: ${e.getMessage}")
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/lock/CheckpointLockSeq.scala b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/lock/CheckpointLockSeq.scala
index ed72361..0d37e79 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/lock/CheckpointLockSeq.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/lock/CheckpointLockSeq.scala
@@ -22,11 +22,11 @@
 case class CheckpointLockSeq(locks: Seq[CheckpointLock]) extends CheckpointLock {
 
   def lock(outtime: Long, unit: TimeUnit): Boolean = {
-    locks.headOption.map(_.lock(outtime, unit)).getOrElse(true)
+    locks.headOption.forall(_.lock(outtime, unit))
   }
 
   def unlock(): Unit = {
-    locks.headOption.foreach(_.unlock)
+    locks.headOption.foreach(_.unlock())
   }
 
 }
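
A small note on the forall change above: for an empty lock sequence it still yields true, matching the previous map(...).getOrElse(true); for example:

```scala
import java.util.concurrent.TimeUnit

import org.apache.griffin.measure.context.streaming.checkpoint.lock.CheckpointLockSeq

// With no underlying locks, the composite lock trivially "succeeds":
CheckpointLockSeq(Nil).lock(1L, TimeUnit.SECONDS) // true
```
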
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetCheckpointClient.scala b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetCheckpointClient.scala
index c301b4c..27b56a3 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetCheckpointClient.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetCheckpointClient.scala
@@ -18,19 +18,22 @@
 package org.apache.griffin.measure.context.streaming.checkpoint.offset
 
 import org.apache.griffin.measure.configuration.dqdefinition.CheckpointParam
-import org.apache.griffin.measure.context.streaming.checkpoint.lock.{CheckpointLock, CheckpointLockSeq}
+import org.apache.griffin.measure.context.streaming.checkpoint.lock.{
+  CheckpointLock,
+  CheckpointLockSeq
+}
 
 object OffsetCheckpointClient extends OffsetCheckpoint with OffsetOps {
   var offsetCheckpoints: Seq[OffsetCheckpoint] = Nil
 
-  def initClient(checkpointParams: Iterable[CheckpointParam], metricName: String) : Unit = {
+  def initClient(checkpointParams: Iterable[CheckpointParam], metricName: String): Unit = {
     val fac = OffsetCheckpointFactory(checkpointParams, metricName)
     offsetCheckpoints = checkpointParams.flatMap(param => fac.getOffsetCheckpoint(param)).toList
   }
 
-  def init(): Unit = offsetCheckpoints.foreach(_.init)
+  def init(): Unit = offsetCheckpoints.foreach(_.init())
   def available(): Boolean = offsetCheckpoints.foldLeft(false)(_ || _.available)
-  def close(): Unit = offsetCheckpoints.foreach(_.close)
+  def close(): Unit = offsetCheckpoints.foreach(_.close())
 
   def cache(kvs: Map[String, String]): Unit = {
     offsetCheckpoints.foreach(_.cache(kvs))
@@ -40,11 +43,11 @@
     maps.fold(Map[String, String]())(_ ++ _)
   }
   def delete(keys: Iterable[String]): Unit = offsetCheckpoints.foreach(_.delete(keys))
-  def clear(): Unit = offsetCheckpoints.foreach(_.clear)
+  def clear(): Unit = offsetCheckpoints.foreach(_.clear())
 
   def listKeys(path: String): List[String] = {
     offsetCheckpoints.foldLeft(Nil: List[String]) { (res, offsetCheckpoint) =>
-      if (res.size > 0) res else offsetCheckpoint.listKeys(path)
+      if (res.nonEmpty) res else offsetCheckpoint.listKeys(path)
     }
   }
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetCheckpointFactory.scala b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetCheckpointFactory.scala
index 3eed5f9..3d6bcfc 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetCheckpointFactory.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetCheckpointFactory.scala
@@ -18,14 +18,16 @@
 package org.apache.griffin.measure.context.streaming.checkpoint.offset
 
 import scala.util.Try
+import scala.util.matching.Regex
 
 import org.apache.griffin.measure.configuration.dqdefinition.CheckpointParam
 
+case class OffsetCheckpointFactory(
+    checkpointParams: Iterable[CheckpointParam],
+    metricName: String)
+    extends Serializable {
 
-case class OffsetCheckpointFactory(checkpointParams: Iterable[CheckpointParam], metricName: String
-                                  ) extends Serializable {
-
-  val ZK_REGEX = """^(?i)zk|zookeeper$""".r
+  val ZK_REGEX: Regex = """^(?i)zk|zookeeper$""".r
 
   def getOffsetCheckpoint(checkpointParam: CheckpointParam): Option[OffsetCheckpoint] = {
     val config = checkpointParam.getConfig
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetCheckpointInZK.scala b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetCheckpointInZK.scala
index b84ee5a..6a8b005 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetCheckpointInZK.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetCheckpointInZK.scala
@@ -24,17 +24,18 @@
 import org.apache.curator.utils.ZKPaths
 import org.apache.zookeeper.CreateMode
 import scala.collection.JavaConverters._
+import scala.util.matching.Regex
 
 import org.apache.griffin.measure.context.streaming.checkpoint.lock.CheckpointLockInZK
 
-
 /**
-  * leverage zookeeper for info cache
-  * @param config
-  * @param metricName
-  */
+ * leverage zookeeper for info cache
+ * @param config zookeeper checkpoint configuration (hosts, namespace, mode, etc.)
+ * @param metricName metric name, used as part of the cache namespace
+ */
 case class OffsetCheckpointInZK(config: Map[String, Any], metricName: String)
-  extends OffsetCheckpoint with OffsetOps {
+    extends OffsetCheckpoint
+    with OffsetOps {
 
   val Hosts = "hosts"
   val Namespace = "namespace"
@@ -43,35 +44,37 @@
   val CloseClear = "close.clear"
   val LockPath = "lock.path"
 
-  val PersistentRegex = """^(?i)persist(ent)?$""".r
-  val EphemeralRegex = """^(?i)ephemeral$""".r
+  val PersistentRegex: Regex = """^(?i)persist(ent)?$""".r
+  val EphemeralRegex: Regex = """^(?i)ephemeral$""".r
 
   final val separator = ZKPaths.PATH_SEPARATOR
 
-  val hosts = config.getOrElse(Hosts, "").toString
-  val namespace = config.getOrElse(Namespace, "").toString
+  val hosts: String = config.getOrElse(Hosts, "").toString
+  val namespace: String = config.getOrElse(Namespace, "").toString
   val mode: CreateMode = config.get(Mode) match {
-    case Some(s: String) => s match {
-      case PersistentRegex() => CreateMode.PERSISTENT
-      case EphemeralRegex() => CreateMode.EPHEMERAL
-      case _ => CreateMode.PERSISTENT
-    }
+    case Some(s: String) =>
+      s match {
+        case PersistentRegex() => CreateMode.PERSISTENT
+        case EphemeralRegex() => CreateMode.EPHEMERAL
+        case _ => CreateMode.PERSISTENT
+      }
     case _ => CreateMode.PERSISTENT
   }
-  val initClear = config.get(InitClear) match {
+  val initClear: Boolean = config.get(InitClear) match {
     case Some(b: Boolean) => b
     case _ => true
   }
-  val closeClear = config.get(CloseClear) match {
+  val closeClear: Boolean = config.get(CloseClear) match {
     case Some(b: Boolean) => b
     case _ => false
   }
-  val lockPath = config.getOrElse(LockPath, "lock").toString
+  val lockPath: String = config.getOrElse(LockPath, "lock").toString
 
   private val cacheNamespace: String =
     if (namespace.isEmpty) metricName else namespace + separator + metricName
 
-  private val builder = CuratorFrameworkFactory.builder()
+  private val builder = CuratorFrameworkFactory
+    .builder()
     .connectString(hosts)
     .retryPolicy(new ExponentialBackoffRetry(1000, 3))
     .namespace(cacheNamespace)
@@ -81,10 +84,10 @@
     client.start()
     info("start zk info cache")
     client.usingNamespace(cacheNamespace)
-    info(s"init with namespace: ${cacheNamespace}")
+    info(s"init with namespace: $cacheNamespace")
     delete(lockPath :: Nil)
     if (initClear) {
-      clear
+      clear()
     }
   }
 
@@ -97,7 +100,7 @@
 
   def close(): Unit = {
     if (closeClear) {
-      clear
+      clear()
     }
     info("close zk info cache")
     client.close()
@@ -117,7 +120,9 @@
   }
 
   def delete(keys: Iterable[String]): Unit = {
-    keys.foreach { key => delete(path(key)) }
+    keys.foreach { key =>
+      delete(path(key))
+    }
   }
 
   def clear(): Unit = {
@@ -142,10 +147,10 @@
 
   private def children(path: String): List[String] = {
     try {
-      client.getChildren().forPath(path).asScala.toList
+      client.getChildren.forPath(path).asScala.toList
     } catch {
       case e: Throwable =>
-        warn(s"list ${path} warn: ${e.getMessage}")
+        warn(s"list $path warn: ${e.getMessage}")
         Nil
     }
   }
@@ -160,12 +165,15 @@
 
   private def create(path: String, content: String): Boolean = {
     try {
-      client.create().creatingParentsIfNeeded().withMode(mode)
+      client
+        .create()
+        .creatingParentsIfNeeded()
+        .withMode(mode)
         .forPath(path, content.getBytes("utf-8"))
       true
     } catch {
       case e: Throwable =>
-        error(s"create ( ${path} -> ${content} ) error: ${e.getMessage}")
+        error(s"create ( $path -> $content ) error: ${e.getMessage}")
         false
     }
   }
@@ -176,17 +184,17 @@
       true
     } catch {
       case e: Throwable =>
-        error(s"update ( ${path} -> ${content} ) error: ${e.getMessage}")
+        error(s"update ( $path -> $content ) error: ${e.getMessage}")
         false
     }
   }
 
   private def read(path: String): Option[String] = {
     try {
-      Some(new String(client.getData().forPath(path), "utf-8"))
+      Some(new String(client.getData.forPath(path), "utf-8"))
     } catch {
       case e: Throwable =>
-        warn(s"read ${path} warn: ${e.getMessage}")
+        warn(s"read $path warn: ${e.getMessage}")
         None
     }
   }
@@ -195,7 +203,7 @@
     try {
       client.delete().guaranteed().deletingChildrenIfNeeded().forPath(path)
     } catch {
-      case e: Throwable => error(s"delete ${path} error: ${e.getMessage}")
+      case e: Throwable => error(s"delete $path error: ${e.getMessage}")
     }
   }
 
@@ -204,7 +212,7 @@
       client.checkExists().forPath(path) != null
     } catch {
       case e: Throwable =>
-        warn(s"check exists ${path} warn: ${e.getMessage}")
+        warn(s"check exists $path warn: ${e.getMessage}")
         false
     }
   }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetOps.scala b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetOps.scala
index 28f266a..7fd4907 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetOps.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/checkpoint/offset/OffsetOps.scala
@@ -25,74 +25,80 @@
   val CleanTime = "clean.time"
   val OldCacheIndex = "old.cache.index"
 
-  def cacheTime(path: String): String = s"${path}/${CacheTime}"
-  def lastProcTime(path: String): String = s"${path}/${LastProcTime}"
-  def readyTime(path: String): String = s"${path}/${ReadyTime}"
-  def cleanTime(path: String): String = s"${path}/${CleanTime}"
-  def oldCacheIndex(path: String): String = s"${path}/${OldCacheIndex}"
+  def cacheTime(path: String): String = s"$path/$CacheTime"
+  def lastProcTime(path: String): String = s"$path/$LastProcTime"
+  def readyTime(path: String): String = s"$path/$ReadyTime"
+  def cleanTime(path: String): String = s"$path/$CleanTime"
+  def oldCacheIndex(path: String): String = s"$path/$OldCacheIndex"
 
   val infoPath = "info"
 
   val finalCacheInfoPath = "info.final"
-  val finalReadyTime = s"${finalCacheInfoPath}/${ReadyTime}"
-  val finalLastProcTime = s"${finalCacheInfoPath}/${LastProcTime}"
-  val finalCleanTime = s"${finalCacheInfoPath}/${CleanTime}"
+  val finalReadyTime = s"$finalCacheInfoPath/$ReadyTime"
+  val finalLastProcTime = s"$finalCacheInfoPath/$LastProcTime"
+  val finalCleanTime = s"$finalCacheInfoPath/$CleanTime"
 
   def startOffsetCheckpoint(): Unit = {
-    genFinalReadyTime
+    genFinalReadyTime()
   }
 
-  def getTimeRange(): (Long, Long) = {
-    readTimeRange
+  def getTimeRange: (Long, Long) = {
+    readTimeRange()
   }
 
-  def getCleanTime(): Long = {
-    readCleanTime
+  def getCleanTime: Long = {
+    readCleanTime()
   }
 
-  def endOffsetCheckpoint: Unit = {
-    genFinalLastProcTime
-    genFinalCleanTime
+  def endOffsetCheckpoint(): Unit = {
+    genFinalLastProcTime()
+    genFinalCleanTime()
   }
 
   private def genFinalReadyTime(): Unit = {
     val subPath = listKeys(infoPath)
-    val keys = subPath.map { p => s"${infoPath}/${p}/${ReadyTime}" }
+    val keys = subPath.map { p =>
+      s"$infoPath/$p/$ReadyTime"
+    }
     val result = read(keys)
     val times = keys.flatMap { k =>
       getLongOpt(result, k)
     }
     if (times.nonEmpty) {
       val time = times.min
-      val map = Map[String, String]((finalReadyTime -> time.toString))
+      val map = Map[String, String](finalReadyTime -> time.toString)
       cache(map)
     }
   }
 
   private def genFinalLastProcTime(): Unit = {
     val subPath = listKeys(infoPath)
-    val keys = subPath.map { p => s"${infoPath}/${p}/${LastProcTime}" }
+    val keys = subPath.map { p =>
+      s"$infoPath/$p/$LastProcTime"
+    }
     val result = read(keys)
     val times = keys.flatMap { k =>
       getLongOpt(result, k)
     }
     if (times.nonEmpty) {
       val time = times.min
-      val map = Map[String, String]((finalLastProcTime -> time.toString))
+      val map = Map[String, String](finalLastProcTime -> time.toString)
       cache(map)
     }
   }
 
   private def genFinalCleanTime(): Unit = {
     val subPath = listKeys(infoPath)
-    val keys = subPath.map { p => s"${infoPath}/${p}/${CleanTime}" }
+    val keys = subPath.map { p =>
+      s"$infoPath/$p/$CleanTime"
+    }
     val result = read(keys)
     val times = keys.flatMap { k =>
       getLongOpt(result, k)
     }
     if (times.nonEmpty) {
       val time = times.min
-      val map = Map[String, String]((finalCleanTime -> time.toString))
+      val map = Map[String, String](finalCleanTime -> time.toString)
       cache(map)
     }
   }
@@ -114,7 +120,7 @@
     try {
       map.get(key).map(_.toLong)
     } catch {
-      case e: Throwable => None
+      case _: Throwable => None
     }
   }
   private def getLong(map: Map[String, String], key: String) = {
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/metric/AccuracyMetric.scala b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/metric/AccuracyMetric.scala
index 6c86dd8..df816f0 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/metric/AccuracyMetric.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/metric/AccuracyMetric.scala
@@ -18,15 +18,15 @@
 package org.apache.griffin.measure.context.streaming.metric
 
 /**
-  * accuracy metric
-  * @param miss     miss count
-  * @param total    total count
-  */
+ * accuracy metric
+ * @param miss     miss count
+ * @param total    total count
+ */
 case class AccuracyMetric(miss: Long, total: Long) extends Metric {
 
   type T = AccuracyMetric
 
-  override def isLegal(): Boolean = getTotal > 0
+  override def isLegal: Boolean = getTotal > 0
 
   def update(delta: T): T = {
     if (delta.miss < miss) AccuracyMetric(delta.miss, total) else this
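
The update rule above only ever lowers the miss count and keeps this metric's total; a small illustration (isLegal relies on getTotal, which is outside this hunk, so the last line is an assumption):

```scala
import org.apache.griffin.measure.context.streaming.metric.AccuracyMetric

val m = AccuracyMetric(miss = 10, total = 100)
m.update(AccuracyMetric(miss = 4, total = 100))  // AccuracyMetric(4, 100): fewer misses win
m.update(AccuracyMetric(miss = 12, total = 100)) // m unchanged: the miss count never increases
AccuracyMetric(miss = 0, total = 0).isLegal      // false, assuming getTotal simply returns total
```
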
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/metric/CacheResults.scala b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/metric/CacheResults.scala
index a3d9106..c6c66bd 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/metric/CacheResults.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/metric/CacheResults.scala
@@ -21,20 +21,19 @@
 
 import org.apache.griffin.measure.Loggable
 
-
 /**
-  * in streaming mode, some metrics may update,
-  * the old metrics are cached here
-  */
+ * in streaming mode, some metrics may be updated;
+ * the old metrics are cached here
+ */
 object CacheResults extends Loggable {
 
   case class CacheResult(timeStamp: Long, updateTime: Long, result: Metric) {
     def olderThan(ut: Long): Boolean = updateTime < ut
-    def update(ut: Long, r: Metric): Option[Metric] = {
+    def update[A <: result.T: Manifest](ut: Long, r: Metric): Option[Metric] = {
       r match {
-        case m: result.T if (olderThan(ut)) =>
+        case m: A if olderThan(ut) =>
           val ur = result.update(m)
-          if (result.differsFrom(ur)) Some(ur) else None
+          Some(ur).filter(result.differsFrom)
         case _ => None
       }
     }
@@ -47,8 +46,8 @@
   }
 
   /**
-    * input new metric results, output the updated metric results.
-    */
+   * input new metric results, output the updated metric results.
+   */
   def update(cacheResults: Iterable[CacheResult]): Iterable[CacheResult] = {
     val updatedCacheResults = cacheResults.flatMap { cacheResult =>
       val CacheResult(t, ut, r) = cacheResult
@@ -62,8 +61,8 @@
   }
 
   /**
-    * clean the out-time cached results, to avoid memory leak
-    */
+   * clean the out-time cached results, to avoid memory leak
+   */
   def refresh(overtime: Long): Unit = {
     val curCacheGroup = cacheGroup.toMap
     val deadCache = curCacheGroup.filter { pr =>
diff --git a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/metric/Metric.scala b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/metric/Metric.scala
index 64cc8e0..6d981d2 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/context/streaming/metric/Metric.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/context/streaming/metric/Metric.scala
@@ -21,7 +21,7 @@
 
   type T <: Metric
 
-  def isLegal(): Boolean = true
+  def isLegal: Boolean = true
 
   def update(delta: T): T
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/DataSource.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/DataSource.scala
index 5c6c9d9..872deb1 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/DataSource.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/DataSource.scala
@@ -27,26 +27,28 @@
 import org.apache.griffin.measure.utils.DataFrameUtil._
 
 /**
-  * data source
-  * @param name     name of data source
-  * @param dsParam  param of this data source
-  * @param dataConnectors       list of data connectors
-  * @param streamingCacheClientOpt   streaming data cache client option
-  */
-case class DataSource(name: String,
-                      dsParam: DataSourceParam,
-                      dataConnectors: Seq[DataConnector],
-                      streamingCacheClientOpt: Option[StreamingCacheClient]
-                     ) extends Loggable with Serializable {
+ * data source
+ * @param name     name of data source
+ * @param dsParam  param of this data source
+ * @param dataConnectors       list of data connectors
+ * @param streamingCacheClientOpt   streaming data cache client option
+ */
+case class DataSource(
+    name: String,
+    dsParam: DataSourceParam,
+    dataConnectors: Seq[DataConnector],
+    streamingCacheClientOpt: Option[StreamingCacheClient])
+    extends Loggable
+    with Serializable {
 
   val isBaseline: Boolean = dsParam.isBaseline
 
   def init(): Unit = {
-    dataConnectors.foreach(_.init)
+    dataConnectors.foreach(_.init())
   }
 
   def loadData(context: DQContext): TimeRange = {
-    info(s"load data [${name}]")
+    info(s"load data [$name]")
     try {
       val timestamp = context.contextId.timestamp
       val (dfOpt, timeRange) = data(timestamp)
@@ -54,12 +56,12 @@
         case Some(df) =>
           context.runTimeTableRegister.registerTable(name, df)
         case None =>
-          warn(s"Data source [${name}] is null!")
+          warn(s"Data source [$name] is null!")
       }
       timeRange
     } catch {
-      case e =>
-        error(s"load data source [${name}] fails")
+      case e: Throwable =>
+        error(s"load data source [$name] fails")
         throw e
     }
   }
@@ -68,7 +70,7 @@
     val batches = dataConnectors.flatMap { dc =>
       val (dfOpt, timeRange) = dc.data(timestamp)
       dfOpt match {
-        case Some(df) => Some((dfOpt, timeRange))
+        case Some(_) => Some((dfOpt, timeRange))
         case _ => None
       }
     }
@@ -78,7 +80,7 @@
     }
     val pairs = batches ++ caches
 
-    if (pairs.size > 0) {
+    if (pairs.nonEmpty) {
       pairs.reduce { (a, b) =>
         (unionDfOpts(a._1, b._1), a._2.merge(b._2))
       }
@@ -92,11 +94,11 @@
   }
 
   def cleanOldData(): Unit = {
-    streamingCacheClientOpt.foreach(_.cleanOutTimeData)
+    streamingCacheClientOpt.foreach(_.cleanOutTimeData())
   }
 
   def processFinish(): Unit = {
-    streamingCacheClientOpt.foreach(_.processFinish)
+    streamingCacheClientOpt.foreach(_.processFinish())
   }
 
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/DataSourceFactory.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/DataSourceFactory.scala
index 436634d..67d9544 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/DataSourceFactory.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/DataSourceFactory.scala
@@ -27,38 +27,45 @@
 import org.apache.griffin.measure.datasource.cache.StreamingCacheClientFactory
 import org.apache.griffin.measure.datasource.connector.{DataConnector, DataConnectorFactory}
 
-
 object DataSourceFactory extends Loggable {
 
-  def getDataSources(sparkSession: SparkSession,
-                     ssc: StreamingContext,
-                     dataSources: Seq[DataSourceParam]
-                    ): Seq[DataSource] = {
+  def getDataSources(
+      sparkSession: SparkSession,
+      ssc: StreamingContext,
+      dataSources: Seq[DataSourceParam]): Seq[DataSource] = {
     dataSources.zipWithIndex.flatMap { pair =>
       val (param, index) = pair
       getDataSource(sparkSession, ssc, param, index)
     }
   }
 
-  private def getDataSource(sparkSession: SparkSession,
-                            ssc: StreamingContext,
-                            dataSourceParam: DataSourceParam,
-                            index: Int
-                           ): Option[DataSource] = {
+  private def getDataSource(
+      sparkSession: SparkSession,
+      ssc: StreamingContext,
+      dataSourceParam: DataSourceParam,
+      index: Int): Option[DataSource] = {
     val name = dataSourceParam.getName
     val connectorParams = dataSourceParam.getConnectors
     val timestampStorage = TimestampStorage()
 
     // for streaming data cache
     val streamingCacheClientOpt = StreamingCacheClientFactory.getClientOpt(
-      sparkSession, dataSourceParam.getCheckpointOpt, name, index, timestampStorage)
+      sparkSession,
+      dataSourceParam.getCheckpointOpt,
+      name,
+      index,
+      timestampStorage)
 
     val dataConnectors: Seq[DataConnector] = connectorParams.flatMap { connectorParam =>
-      DataConnectorFactory.getDataConnector(sparkSession, ssc, connectorParam,
-        timestampStorage, streamingCacheClientOpt) match {
-          case Success(connector) => Some(connector)
-          case _ => None
-        }
+      DataConnectorFactory.getDataConnector(
+        sparkSession,
+        ssc,
+        connectorParam,
+        timestampStorage,
+        streamingCacheClientOpt) match {
+        case Success(connector) => Some(connector)
+        case _ => None
+      }
     }
 
     Some(DataSource(name, dataSourceParam, dataConnectors, streamingCacheClientOpt))
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/TimestampStorage.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/TimestampStorage.scala
index b09cdb6..e4a962d 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/TimestampStorage.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/TimestampStorage.scala
@@ -20,26 +20,27 @@
 import scala.collection.mutable.{SortedSet => MutableSortedSet}
 
 import org.apache.griffin.measure.Loggable
+
 /**
-  * tmst cache, CRUD of timestamps
-  */
+ * tmst cache, CRUD of timestamps
+ */
 case class TimestampStorage() extends Loggable {
 
   private val tmstGroup: MutableSortedSet[Long] = MutableSortedSet.empty[Long]
 
   // -- insert tmst into tmst group --
-  def insert(tmst: Long) : MutableSortedSet[Long] = tmstGroup += tmst
-  def insert(tmsts: Iterable[Long]) : MutableSortedSet[Long] = tmstGroup ++= tmsts
+  def insert(tmst: Long): MutableSortedSet[Long] = tmstGroup += tmst
+  def insert(tmsts: Iterable[Long]): MutableSortedSet[Long] = tmstGroup ++= tmsts
 
   // -- remove tmst from tmst group --
-  def remove(tmst: Long) : MutableSortedSet[Long] = tmstGroup -= tmst
-  def remove(tmsts: Iterable[Long]) : MutableSortedSet[Long] = tmstGroup --= tmsts
+  def remove(tmst: Long): MutableSortedSet[Long] = tmstGroup -= tmst
+  def remove(tmsts: Iterable[Long]): MutableSortedSet[Long] = tmstGroup --= tmsts
 
   // -- get subset of tmst group --
-  def fromUntil(from: Long, until: Long) : Set[Long] = tmstGroup.range(from, until).toSet
-  def afterTil(after: Long, til: Long) : Set[Long] = tmstGroup.range(after + 1, til + 1).toSet
-  def until(until: Long) : Set[Long] = tmstGroup.until(until).toSet
-  def from(from: Long) : Set[Long] = tmstGroup.from(from).toSet
-  def all : Set[Long] = tmstGroup.toSet
+  def fromUntil(from: Long, until: Long): Set[Long] = tmstGroup.range(from, until).toSet
+  def afterTil(after: Long, til: Long): Set[Long] = tmstGroup.range(after + 1, til + 1).toSet
+  def until(until: Long): Set[Long] = tmstGroup.until(until).toSet
+  def from(from: Long): Set[Long] = tmstGroup.from(from).toSet
+  def all: Set[Long] = tmstGroup.toSet
 
 }
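
The half-open versus closed range semantics above are easy to mix up; a quick sketch:

```scala
import org.apache.griffin.measure.datasource.TimestampStorage

val storage = TimestampStorage()
storage.insert(Seq(1L, 2L, 3L, 5L, 8L))

storage.fromUntil(2L, 5L) // Set(2, 3)    [from, until)
storage.afterTil(2L, 5L)  // Set(3, 5)    (after, til]
storage.until(3L)         // Set(1, 2)
storage.from(3L)          // Set(3, 5, 8)
```
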
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheClient.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheClient.scala
index dae6e52..04775c7 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheClient.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheClient.scala
@@ -19,12 +19,14 @@
 
 import java.util.concurrent.TimeUnit
 
+import scala.collection.mutable
 import scala.util.Random
 
 import org.apache.spark.sql._
 
 import org.apache.griffin.measure.Loggable
 import org.apache.griffin.measure.context.TimeRange
+import org.apache.griffin.measure.context.streaming.checkpoint.lock.CheckpointLock
 import org.apache.griffin.measure.context.streaming.checkpoint.offset.OffsetCheckpointClient
 import org.apache.griffin.measure.datasource.TimestampStorage
 import org.apache.griffin.measure.step.builder.ConstantColumns
@@ -32,15 +34,17 @@
 import org.apache.griffin.measure.utils.DataFrameUtil._
 import org.apache.griffin.measure.utils.ParamUtil._
 
-
 /**
-  * data source cache in streaming mode
-  * save data frame into hdfs in dump phase
-  * read data frame from hdfs in calculate phase
-  * with update and clean actions for the cache data
-  */
+ * data source cache in streaming mode
+ * saves data frames into hdfs in the dump phase,
+ * reads data frames from hdfs in the calculate phase,
+ * with update and clean actions for the cached data
+ */
 trait StreamingCacheClient
-  extends StreamingOffsetCacheable with WithFanIn[Long] with Loggable with Serializable {
+    extends StreamingOffsetCacheable
+    with WithFanIn[Long]
+    with Loggable
+    with Serializable {
 
   val sparkSession: SparkSession
   val param: Map[String, Any]
@@ -48,16 +52,17 @@
   val index: Int
 
   val timestampStorage: TimestampStorage
-  protected def fromUntilRangeTmsts(from: Long, until: Long) =
+  protected def fromUntilRangeTmsts(from: Long, until: Long): Set[Long] =
     timestampStorage.fromUntil(from, until)
 
-  protected def clearTmst(t: Long) = timestampStorage.remove(t)
-  protected def clearTmstsUntil(until: Long) = {
+  protected def clearTmst(t: Long): mutable.SortedSet[Long] = timestampStorage.remove(t)
+  protected def clearTmstsUntil(until: Long): mutable.SortedSet[Long] = {
     val outDateTmsts = timestampStorage.until(until)
     timestampStorage.remove(outDateTmsts)
   }
-  protected def afterTilRangeTmsts(after: Long, til: Long) = fromUntilRangeTmsts(after + 1, til + 1)
-  protected def clearTmstsTil(til: Long) = clearTmstsUntil(til + 1)
+  protected def afterTilRangeTmsts(after: Long, til: Long): Set[Long] =
+    fromUntilRangeTmsts(after + 1, til + 1)
+  protected def clearTmstsTil(til: Long): mutable.SortedSet[Long] = clearTmstsUntil(til + 1)
 
   val _FilePath = "file.path"
   val _InfoPath = "info.path"
@@ -65,9 +70,9 @@
   val _ReadyTimeDelay = "ready.time.delay"
   val _TimeRange = "time.range"
 
-  val rdmStr = Random.alphanumeric.take(10).mkString
-  val defFilePath = s"hdfs:///griffin/cache/${dsName}_${rdmStr}"
-  val defInfoPath = s"${index}"
+  val rdmStr: String = Random.alphanumeric.take(10).mkString
+  val defFilePath = s"hdfs:///griffin/cache/${dsName}_$rdmStr"
+  val defInfoPath = s"$index"
 
   val filePath: String = param.getString(_FilePath, defFilePath)
   val cacheInfoPath: String = param.getString(_InfoPath, defInfoPath)
@@ -77,11 +82,11 @@
   val readyTimeDelay: Long =
     TimeUtil.milliseconds(param.getString(_ReadyTimeDelay, "1m")).getOrElse(60000L)
 
-  val deltaTimeRange: (Long, Long) = {
+  def deltaTimeRange[T <: Seq[String]: Manifest]: (Long, Long) = {
     def negative(n: Long): Long = if (n <= 0) n else 0
     param.get(_TimeRange) match {
-      case Some(seq: Seq[String]) =>
-        val nseq = seq.flatMap(TimeUtil.milliseconds(_))
+      case Some(seq: T) =>
+        val nseq = seq.flatMap(TimeUtil.milliseconds)
         val ns = negative(nseq.headOption.getOrElse(0))
         val ne = negative(nseq.tail.headOption.getOrElse(0))
         (ns, ne)
@@ -90,16 +95,16 @@
   }
 
   val _ReadOnly = "read.only"
-  val readOnly = param.getBoolean(_ReadOnly, false)
+  val readOnly: Boolean = param.getBoolean(_ReadOnly, defValue = false)
 
   val _Updatable = "updatable"
-  val updatable = param.getBoolean(_Updatable, false)
+  val updatable: Boolean = param.getBoolean(_Updatable, defValue = false)
 
-  val newCacheLock = OffsetCheckpointClient.genLock(s"${cacheInfoPath}.new")
-  val oldCacheLock = OffsetCheckpointClient.genLock(s"${cacheInfoPath}.old")
+  val newCacheLock: CheckpointLock = OffsetCheckpointClient.genLock(s"$cacheInfoPath.new")
+  val oldCacheLock: CheckpointLock = OffsetCheckpointClient.genLock(s"$cacheInfoPath.old")
 
-  val newFilePath = s"${filePath}/new"
-  val oldFilePath = s"${filePath}/old"
+  val newFilePath = s"$filePath/new"
+  val oldFilePath = s"$filePath/old"
 
   val defOldCacheIndex = 0L
 
@@ -112,10 +117,10 @@
   }
 
   /**
-    * save data frame in dump phase
-    * @param dfOpt    data frame to be saved
-    * @param ms       timestamp of this data frame
-    */
+   * save data frame in dump phase
+   * @param dfOpt    data frame to be saved
+   * @param ms       timestamp of this data frame
+   */
   def saveData(dfOpt: Option[DataFrame], ms: Long): Unit = {
     if (!readOnly) {
       dfOpt match {
@@ -125,7 +130,7 @@
 
           // cache df
           val cnt = df.count
-          info(s"save ${dsName} data count: ${cnt}")
+          info(s"save $dsName data count: $cnt")
 
           if (cnt > 0) {
             // lock makes it safer when writing new cache data
@@ -150,7 +155,7 @@
 
       // submit cache time and ready time
       if (fanIncrement(ms)) {
-        info(s"save data [${ms}] finish")
+        info(s"save data [$ms] finish")
         submitCacheTime(ms)
         submitReadyTime(ms)
       }
@@ -159,9 +164,9 @@
   }
 
   /**
-    * read data frame in calculation phase
-    * @return   data frame to calculate, with the time range of data
-    */
+   * read data frame in calculation phase
+   * @return   data frame to calculate, with the time range of data
+   */
   def readData(): (Option[DataFrame], TimeRange) = {
     // time range: (a, b]
     val timeRange = OffsetCheckpointClient.getTimeRange
@@ -189,9 +194,9 @@
     }
 
     // old cache data
-    val oldCacheIndexOpt = if (updatable) readOldCacheIndex else None
+    val oldCacheIndexOpt = if (updatable) readOldCacheIndex() else None
     val oldDfOpt = oldCacheIndexOpt.flatMap { idx =>
-      val oldDfPath = s"${oldFilePath}/${idx}"
+      val oldDfPath = s"$oldFilePath/$idx"
       try {
         val dfr = sparkSession.read
         readDataFrameOpt(dfr, oldDfPath).map(_.filter(filterStr))
@@ -213,62 +218,71 @@
     (cacheDfOpt, retTimeRange)
   }
 
-  private def cleanOutTimePartitions(path: String, outTime: Long, partitionOpt: Option[String],
-                                     func: (Long, Long) => Boolean
-                                    ): Unit = {
+  private def cleanOutTimePartitions(
+      path: String,
+      outTime: Long,
+      partitionOpt: Option[String],
+      func: (Long, Long) => Boolean): Unit = {
     val earlierOrEqPaths = listPartitionsByFunc(path: String, outTime, partitionOpt, func)
     // delete out time data path
     earlierOrEqPaths.foreach { path =>
-      info(s"delete hdfs path: ${path}")
+      info(s"delete hdfs path: $path")
       HdfsUtil.deleteHdfsPath(path)
     }
   }
-  private def listPartitionsByFunc(path: String, bound: Long, partitionOpt: Option[String],
-                                        func: (Long, Long) => Boolean
-                                       ): Iterable[String] = {
+  private def listPartitionsByFunc(
+      path: String,
+      bound: Long,
+      partitionOpt: Option[String],
+      func: (Long, Long) => Boolean): Iterable[String] = {
     val names = HdfsUtil.listSubPathsByType(path, "dir")
     val regex = partitionOpt match {
-      case Some(partition) => s"^${partition}=(\\d+)$$".r
+      case Some(partition) => s"^$partition=(\\d+)$$".r
       case _ => "^(\\d+)$".r
     }
-    names.filter { name =>
-      name match {
-        case regex(value) =>
-          str2Long(value) match {
-            case Some(t) => func(t, bound)
-            case _ => false
-          }
-        case _ => false
+    names
+      .filter { name =>
+        name match {
+          case regex(value) =>
+            str2Long(value) match {
+              case Some(t) => func(t, bound)
+              case _ => false
+            }
+          case _ => false
+        }
       }
-    }.map(name => s"${path}/${name}")
+      .map(name => s"$path/$name")
   }
   private def str2Long(str: String): Option[Long] = {
     try {
       Some(str.toLong)
     } catch {
-      case e: Throwable => None
+      case _: Throwable => None
     }
   }
 
   /**
-    * clean out-time cached data on hdfs
-    */
+   * clean out-time cached data on hdfs
+   */
   def cleanOutTimeData(): Unit = {
     // clean tmst
-    val cleanTime = readCleanTime
-    cleanTime.foreach(clearTmstsTil(_))
+    val cleanTime = readCleanTime()
+    cleanTime.foreach(clearTmstsTil)
 
     if (!readOnly) {
       // new cache data
-      val newCacheCleanTime = if (updatable) readLastProcTime else readCleanTime
+      val newCacheCleanTime = if (updatable) readLastProcTime() else readCleanTime()
       newCacheCleanTime match {
         case Some(nct) =>
           // clean calculated new cache data
           val newCacheLocked = newCacheLock.lock(-1, TimeUnit.SECONDS)
           if (newCacheLocked) {
             try {
-              cleanOutTimePartitions(newFilePath, nct, Some(ConstantColumns.tmst),
-                (a: Long, b: Long) => (a <= b))
+              cleanOutTimePartitions(
+                newFilePath,
+                nct,
+                Some(ConstantColumns.tmst),
+                (a: Long, b: Long) => a <= b)
             } catch {
               case e: Throwable => error(s"clean new cache data error: ${e.getMessage}")
             } finally {
@@ -281,17 +295,16 @@
       }
 
       // old cache data
-      val oldCacheCleanTime = if (updatable) readCleanTime else None
+      val oldCacheCleanTime = if (updatable) readCleanTime() else None
       oldCacheCleanTime match {
-        case Some(oct) =>
-          val oldCacheIndexOpt = readOldCacheIndex
+        case Some(_) =>
+          val oldCacheIndexOpt = readOldCacheIndex()
           oldCacheIndexOpt.foreach { idx =>
-            val oldDfPath = s"${oldFilePath}/${idx}"
             val oldCacheLocked = oldCacheLock.lock(-1, TimeUnit.SECONDS)
             if (oldCacheLocked) {
               try {
                 // clean calculated old cache data
-                cleanOutTimePartitions(oldFilePath, idx, None, (a: Long, b: Long) => (a < b))
+                cleanOutTimePartitions(oldFilePath, idx, None, (a: Long, b: Long) => a < b)
                 // clean out time old cache data not calculated
 //                cleanOutTimePartitions(oldDfPath, oct, Some(InternalColumns.tmst))
               } catch {
@@ -309,9 +322,9 @@
   }
 
   /**
-    * update old cached data by new data frame
-    * @param dfOpt    data frame to update old cached data
-    */
+   * update old cached data by new data frame
+   * @param dfOpt    data frame to update old cached data
+   */
   def updateData(dfOpt: Option[DataFrame]): Unit = {
     if (!readOnly && updatable) {
       dfOpt match {
@@ -320,12 +333,12 @@
           val oldCacheLocked = oldCacheLock.lock(-1, TimeUnit.SECONDS)
           if (oldCacheLocked) {
             try {
-              val oldCacheIndexOpt = readOldCacheIndex
+              val oldCacheIndexOpt = readOldCacheIndex()
               val nextOldCacheIndex = oldCacheIndexOpt.getOrElse(defOldCacheIndex) + 1
 
-              val oldDfPath = s"${oldFilePath}/${nextOldCacheIndex}"
+              val oldDfPath = s"$oldFilePath/$nextOldCacheIndex"
               val cleanTime = getNextCleanTime
-              val filterStr = s"`${ConstantColumns.tmst}` > ${cleanTime}"
+              val filterStr = s"`${ConstantColumns.tmst}` > $cleanTime"
               val updateDf = df.filter(filterStr)
 
               val prlCount = sparkSession.sparkContext.defaultParallelism
@@ -348,9 +361,9 @@
   }
 
   /**
-    * each time calculation phase finishes,
-    * data source cache needs to submit some cache information
-    */
+   * each time a calculation phase finishes,
+   * the data source cache needs to submit some cache information
+   */
   def processFinish(): Unit = {
     // next last proc time
     val timeRange = OffsetCheckpointClient.getTimeRange
@@ -362,7 +375,7 @@
   }
 
   // read next clean time
-  private def getNextCleanTime(): Long = {
+  private def getNextCleanTime: Long = {
     val timeRange = OffsetCheckpointClient.getTimeRange
     val nextCleanTime = timeRange._2 + deltaTimeRange._1
     nextCleanTime
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheClientFactory.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheClientFactory.scala
index 8bc19de..8017b81 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheClientFactory.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheClientFactory.scala
@@ -17,6 +17,8 @@
 
 package org.apache.griffin.measure.datasource.cache
 
+import scala.util.matching.Regex
+
 import org.apache.spark.sql.SparkSession
 
 import org.apache.griffin.measure.Loggable
@@ -26,26 +28,29 @@
 object StreamingCacheClientFactory extends Loggable {
 
   private object DataSourceCacheType {
-    val ParquetRegex = "^(?i)parq(uet)?$".r
-    val JsonRegex = "^(?i)json$".r
-    val OrcRegex = "^(?i)orc$".r
+    val ParquetRegex: Regex = "^(?i)parq(uet)?$".r
+    val JsonRegex: Regex = "^(?i)json$".r
+    val OrcRegex: Regex = "^(?i)orc$".r
   }
   import DataSourceCacheType._
 
   val _type = "type"
 
   /**
-    * create streaming cache client
-    * @param sparkSession     sparkSession in spark environment
-    * @param checkpointOpt  data source checkpoint/cache config option
-    * @param name           data source name
-    * @param index          data source index
-    * @param tmstCache      the same tmstCache instance inside a data source
-    * @return               streaming cache client option
-    */
-  def getClientOpt(sparkSession: SparkSession, checkpointOpt: Option[Map[String, Any]],
-                   name: String, index: Int, tmstCache: TimestampStorage
-                  ): Option[StreamingCacheClient] = {
+   * create streaming cache client
+   * @param sparkSession     sparkSession in spark environment
+   * @param checkpointOpt  data source checkpoint/cache config option
+   * @param name           data source name
+   * @param index          data source index
+   * @param tmstCache      the same tmstCache instance inside a data source
+   * @return               streaming cache client option
+   */
+  def getClientOpt(
+      sparkSession: SparkSession,
+      checkpointOpt: Option[Map[String, Any]],
+      name: String,
+      index: Int,
+      tmstCache: TimestampStorage): Option[StreamingCacheClient] = {
     checkpointOpt.flatMap { param =>
       try {
         val tp = param.getString(_type, "")
@@ -61,7 +66,7 @@
         }
         Some(dsCache)
       } catch {
-        case e: Throwable =>
+        case _: Throwable =>
           error("generate data source cache fails")
           None
       }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheJsonClient.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheJsonClient.scala
index 03a54fb..4ef764d 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheJsonClient.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheJsonClient.scala
@@ -22,14 +22,18 @@
 import org.apache.griffin.measure.datasource.TimestampStorage
 
 /**
-  * data source cache in json format
-  */
-case class StreamingCacheJsonClient(sparkSession: SparkSession, param: Map[String, Any],
-                                    dsName: String, index: Int, timestampStorage: TimestampStorage
-                              ) extends StreamingCacheClient {
+ * data source cache in json format
+ */
+case class StreamingCacheJsonClient(
+    sparkSession: SparkSession,
+    param: Map[String, Any],
+    dsName: String,
+    index: Int,
+    timestampStorage: TimestampStorage)
+    extends StreamingCacheClient {
 
   protected def writeDataFrame(dfw: DataFrameWriter[Row], path: String): Unit = {
-    info(s"write path: ${path}")
+    info(s"write path: $path")
     dfw.json(path)
   }
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheOrcClient.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheOrcClient.scala
index 3ce5f04..0440d3c 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheOrcClient.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheOrcClient.scala
@@ -22,14 +22,18 @@
 import org.apache.griffin.measure.datasource.TimestampStorage
 
 /**
-  * data source cache in orc format
-  */
-case class StreamingCacheOrcClient(sparkSession: SparkSession, param: Map[String, Any],
-                                   dsName: String, index: Int, timestampStorage: TimestampStorage
-                             ) extends StreamingCacheClient {
+ * data source cache in orc format
+ */
+case class StreamingCacheOrcClient(
+    sparkSession: SparkSession,
+    param: Map[String, Any],
+    dsName: String,
+    index: Int,
+    timestampStorage: TimestampStorage)
+    extends StreamingCacheClient {
 
   protected def writeDataFrame(dfw: DataFrameWriter[Row], path: String): Unit = {
-    info(s"write path: ${path}")
+    info(s"write path: $path")
     dfw.orc(path)
   }
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheParquetClient.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheParquetClient.scala
index 26cfbd3..7775c69 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheParquetClient.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingCacheParquetClient.scala
@@ -22,19 +22,20 @@
 import org.apache.griffin.measure.datasource.TimestampStorage
 
 /**
-  * data source cache in parquet format
-  */
-case class StreamingCacheParquetClient(sparkSession: SparkSession,
-                                       param: Map[String, Any],
-                                       dsName: String,
-                                       index: Int,
-                                       timestampStorage: TimestampStorage
-                                 ) extends StreamingCacheClient {
+ * data source cache in parquet format
+ */
+case class StreamingCacheParquetClient(
+    sparkSession: SparkSession,
+    param: Map[String, Any],
+    dsName: String,
+    index: Int,
+    timestampStorage: TimestampStorage)
+    extends StreamingCacheClient {
 
   sparkSession.sparkContext.hadoopConfiguration.set("parquet.enable.summary-metadata", "false")
 
   protected def writeDataFrame(dfw: DataFrameWriter[Row], path: String): Unit = {
-    info(s"write path: ${path}")
+    info(s"write path: $path")
     dfw.parquet(path)
   }
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingOffsetCacheable.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingOffsetCacheable.scala
index 86de0c8..9bd5e29 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingOffsetCacheable.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/StreamingOffsetCacheable.scala
@@ -21,37 +21,37 @@
 import org.apache.griffin.measure.context.streaming.checkpoint.offset.OffsetCheckpointClient
 
 /**
-  * timestamp offset of streaming data source cache
-  */
+ * timestamp offset of streaming data source cache
+ */
 trait StreamingOffsetCacheable extends Loggable with Serializable {
 
   val cacheInfoPath: String
   val readyTimeInterval: Long
   val readyTimeDelay: Long
 
-  def selfCacheInfoPath : String = s"${OffsetCheckpointClient.infoPath}/${cacheInfoPath}"
+  def selfCacheInfoPath: String = s"${OffsetCheckpointClient.infoPath}/$cacheInfoPath"
 
-  def selfCacheTime : String = OffsetCheckpointClient.cacheTime(selfCacheInfoPath)
-  def selfLastProcTime : String = OffsetCheckpointClient.lastProcTime(selfCacheInfoPath)
-  def selfReadyTime : String = OffsetCheckpointClient.readyTime(selfCacheInfoPath)
-  def selfCleanTime : String = OffsetCheckpointClient.cleanTime(selfCacheInfoPath)
-  def selfOldCacheIndex : String = OffsetCheckpointClient.oldCacheIndex(selfCacheInfoPath)
+  def selfCacheTime: String = OffsetCheckpointClient.cacheTime(selfCacheInfoPath)
+  def selfLastProcTime: String = OffsetCheckpointClient.lastProcTime(selfCacheInfoPath)
+  def selfReadyTime: String = OffsetCheckpointClient.readyTime(selfCacheInfoPath)
+  def selfCleanTime: String = OffsetCheckpointClient.cleanTime(selfCacheInfoPath)
+  def selfOldCacheIndex: String = OffsetCheckpointClient.oldCacheIndex(selfCacheInfoPath)
 
   protected def submitCacheTime(ms: Long): Unit = {
-    val map = Map[String, String]((selfCacheTime -> ms.toString))
+    val map = Map[String, String](selfCacheTime -> ms.toString)
     OffsetCheckpointClient.cache(map)
   }
 
   protected def submitReadyTime(ms: Long): Unit = {
     val curReadyTime = ms - readyTimeDelay
     if (curReadyTime % readyTimeInterval == 0) {
-      val map = Map[String, String]((selfReadyTime -> curReadyTime.toString))
+      val map = Map[String, String](selfReadyTime -> curReadyTime.toString)
       OffsetCheckpointClient.cache(map)
     }
   }
 
   protected def submitLastProcTime(ms: Long): Unit = {
-    val map = Map[String, String]((selfLastProcTime -> ms.toString))
+    val map = Map[String, String](selfLastProcTime -> ms.toString)
     OffsetCheckpointClient.cache(map)
   }
 
@@ -59,7 +59,7 @@
 
   protected def submitCleanTime(ms: Long): Unit = {
     val cleanTime = genCleanTime(ms)
-    val map = Map[String, String]((selfCleanTime -> cleanTime.toString))
+    val map = Map[String, String](selfCleanTime -> cleanTime.toString)
     OffsetCheckpointClient.cache(map)
   }
 
@@ -68,7 +68,7 @@
   protected def readCleanTime(): Option[Long] = readSelfInfo(selfCleanTime)
 
   protected def submitOldCacheIndex(index: Long): Unit = {
-    val map = Map[String, String]((selfOldCacheIndex -> index.toString))
+    val map = Map[String, String](selfOldCacheIndex -> index.toString)
     OffsetCheckpointClient.cache(map)
   }
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/WithFanIn.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/WithFanIn.scala
index 3b0e7f2..516fbd4 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/WithFanIn.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/cache/WithFanIn.scala
@@ -22,9 +22,9 @@
 import scala.collection.concurrent.{Map => ConcMap, TrieMap}
 
 /**
-  * fan in trait, for multiple input and one output
-  * to support multiple parallel data connectors in one data source
-  */
+ * fan in trait, for multiple input and one output
+ * to support multiple parallel data connectors in one data source
+ */
 trait WithFanIn[T] {
 
   // total input number
@@ -37,21 +37,21 @@
   }
 
   /**
-    * increment for a key, to test if all parallel inputs finished
-    * @param key
-    * @return
-    */
+   * increment for a key, to test if all parallel inputs finished
+   * @param key
+   * @return
+   */
   def fanIncrement(key: T): Boolean = {
     fanInc(key)
     fanInCountMap.get(key) match {
-      case Some(n) if (n >= totalNum.get) => {
+      case Some(n) if n >= totalNum.get =>
         fanInCountMap.remove(key)
         true
-      }
       case _ => false
     }
   }
 
+  @scala.annotation.tailrec
   private def fanInc(key: T): Unit = {
     fanInCountMap.get(key) match {
       case Some(n) =>
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/DataConnector.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/DataConnector.scala
index fedf6e3..481c9f9 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/DataConnector.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/DataConnector.scala
@@ -19,6 +19,8 @@
 
 import java.util.concurrent.atomic.AtomicLong
 
+import scala.collection.mutable
+
 import org.apache.spark.sql.{DataFrame, SparkSession}
 import org.apache.spark.sql.functions._
 
@@ -40,8 +42,8 @@
   val id: String = DataConnectorIdGenerator.genId
 
   val timestampStorage: TimestampStorage
-  protected def saveTmst(t: Long) = timestampStorage.insert(t)
-  protected def readTmst(t: Long) = timestampStorage.fromUntil(t, t + 1)
+  protected def saveTmst(t: Long): mutable.SortedSet[Long] = timestampStorage.insert(t)
+  protected def readTmst(t: Long): Set[Long] = timestampStorage.fromUntil(t, t + 1)
 
   def init(): Unit
 
@@ -61,7 +63,7 @@
     val dcDfName = dcParam.getDataFrameName("this")
 
     try {
-      saveTmst(timestamp)    // save timestamp
+      saveTmst(timestamp) // save timestamp
 
       dfOpt.flatMap { df =>
         val (preProcRules, thisTable) =
@@ -78,7 +80,7 @@
         preprocJob.execute(context)
 
         // out data
-        val outDf = context.sparkSession.table(s"`${thisTable}`")
+        val outDf = context.sparkSession.table(s"`$thisTable`")
 
         // add tmst column
         val withTmstDf = outDf.withColumn(ConstantColumns.tmst, lit(timestamp))
@@ -91,7 +93,7 @@
 
     } catch {
       case e: Throwable =>
-        error(s"pre-process of data connector [${id}] error: ${e.getMessage}", e)
+        error(s"pre-process of data connector [$id] error: ${e.getMessage}", e)
         None
     }
   }
@@ -102,7 +104,7 @@
   private val head: String = "dc"
 
   def genId: String = {
-    s"${head}${increment}"
+    s"$head$increment"
   }
 
   private def increment: Long = {
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/DataConnectorFactory.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/DataConnectorFactory.scala
index 0cf9f56..2578470 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/DataConnectorFactory.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/DataConnectorFactory.scala
@@ -18,6 +18,7 @@
 package org.apache.griffin.measure.datasource.connector
 
 import scala.util.Try
+import scala.util.matching.Regex
 
 import org.apache.spark.sql.SparkSession
 import org.apache.spark.streaming.StreamingContext
@@ -29,58 +30,61 @@
 import org.apache.griffin.measure.datasource.connector.batch._
 import org.apache.griffin.measure.datasource.connector.streaming._
 
-
 object DataConnectorFactory extends Loggable {
 
-  val HiveRegex = """^(?i)hive$""".r
-  val AvroRegex = """^(?i)avro$""".r
-  val FileRegex = """^(?i)file$""".r
-  val TextDirRegex = """^(?i)text-dir$""".r
+  val HiveRegex: Regex = """^(?i)hive$""".r
+  val AvroRegex: Regex = """^(?i)avro$""".r
+  val FileRegex: Regex = """^(?i)file$""".r
+  val TextDirRegex: Regex = """^(?i)text-dir$""".r
 
-  val KafkaRegex = """^(?i)kafka$""".r
+  val KafkaRegex: Regex = """^(?i)kafka$""".r
 
-  val CustomRegex = """^(?i)custom$""".r
+  val CustomRegex: Regex = """^(?i)custom$""".r
 
   /**
-    * create data connector
-    * @param sparkSession     spark env
-    * @param ssc              spark streaming env
-    * @param dcParam          data connector param
-    * @param tmstCache        same tmst cache in one data source
-    * @param streamingCacheClientOpt   for streaming cache
-    * @return   data connector
-    */
-  def getDataConnector(sparkSession: SparkSession,
-                       ssc: StreamingContext,
-                       dcParam: DataConnectorParam,
-                       tmstCache: TimestampStorage,
-                       streamingCacheClientOpt: Option[StreamingCacheClient]
-                      ): Try[DataConnector] = {
+   * create data connector
+   * @param sparkSession     spark env
+   * @param ssc              spark streaming env
+   * @param dcParam          data connector param
+   * @param tmstCache        same tmst cache in one data source
+   * @param streamingCacheClientOpt   for streaming cache
+   * @return   data connector
+   */
+  def getDataConnector(
+      sparkSession: SparkSession,
+      ssc: StreamingContext,
+      dcParam: DataConnectorParam,
+      tmstCache: TimestampStorage,
+      streamingCacheClientOpt: Option[StreamingCacheClient]): Try[DataConnector] = {
     val conType = dcParam.getType
-    val version = dcParam.getVersion
     Try {
       conType match {
         case HiveRegex() => HiveBatchDataConnector(sparkSession, dcParam, tmstCache)
         case AvroRegex() => AvroBatchDataConnector(sparkSession, dcParam, tmstCache)
         case FileRegex() => FileBasedDataConnector(sparkSession, dcParam, tmstCache)
         case TextDirRegex() => TextDirBatchDataConnector(sparkSession, dcParam, tmstCache)
-        case CustomRegex() => getCustomConnector(sparkSession, ssc, dcParam, tmstCache, streamingCacheClientOpt)
+        case CustomRegex() =>
+          getCustomConnector(sparkSession, ssc, dcParam, tmstCache, streamingCacheClientOpt)
         case KafkaRegex() =>
-          getStreamingDataConnector(sparkSession, ssc, dcParam, tmstCache, streamingCacheClientOpt)
+          getStreamingDataConnector(
+            sparkSession,
+            ssc,
+            dcParam,
+            tmstCache,
+            streamingCacheClientOpt)
         case _ => throw new Exception("connector creation error!")
       }
     }
   }
 
-  private def getStreamingDataConnector(sparkSession: SparkSession,
-                                        ssc: StreamingContext,
-                                        dcParam: DataConnectorParam,
-                                        tmstCache: TimestampStorage,
-                                        streamingCacheClientOpt: Option[StreamingCacheClient]
-                                       ): StreamingDataConnector = {
+  private def getStreamingDataConnector(
+      sparkSession: SparkSession,
+      ssc: StreamingContext,
+      dcParam: DataConnectorParam,
+      tmstCache: TimestampStorage,
+      streamingCacheClientOpt: Option[StreamingCacheClient]): StreamingDataConnector = {
     if (ssc == null) throw new Exception("streaming context is null!")
     val conType = dcParam.getType
-    val version = dcParam.getVersion
     conType match {
       case KafkaRegex() =>
         getKafkaDataConnector(sparkSession, ssc, dcParam, tmstCache, streamingCacheClientOpt)
@@ -88,41 +92,46 @@
     }
   }
 
-  private def getCustomConnector(sparkSession: SparkSession,
-                                 ssc: StreamingContext,
-                                 dcParam: DataConnectorParam,
-                                 timestampStorage: TimestampStorage,
-                                 streamingCacheClientOpt: Option[StreamingCacheClient]): DataConnector = {
+  private def getCustomConnector(
+      sparkSession: SparkSession,
+      ssc: StreamingContext,
+      dcParam: DataConnectorParam,
+      timestampStorage: TimestampStorage,
+      streamingCacheClientOpt: Option[StreamingCacheClient]): DataConnector = {
     val className = dcParam.getConfig("class").asInstanceOf[String]
     val cls = Class.forName(className)
     if (classOf[BatchDataConnector].isAssignableFrom(cls)) {
-      val method = cls.getDeclaredMethod("apply",
+      val method = cls.getDeclaredMethod(
+        "apply",
         classOf[SparkSession],
         classOf[DataConnectorParam],
-        classOf[TimestampStorage]
-      )
-      method.invoke(null, sparkSession, dcParam, timestampStorage).asInstanceOf[BatchDataConnector]
+        classOf[TimestampStorage])
+      method
+        .invoke(null, sparkSession, dcParam, timestampStorage)
+        .asInstanceOf[BatchDataConnector]
     } else if (classOf[StreamingDataConnector].isAssignableFrom(cls)) {
-      val method = cls.getDeclaredMethod("apply",
+      val method = cls.getDeclaredMethod(
+        "apply",
         classOf[SparkSession],
         classOf[StreamingContext],
         classOf[DataConnectorParam],
         classOf[TimestampStorage],
-        classOf[Option[StreamingCacheClient]]
-      )
-      method.invoke(null, sparkSession, ssc, dcParam, timestampStorage, streamingCacheClientOpt)
+        classOf[Option[StreamingCacheClient]])
+      method
+        .invoke(null, sparkSession, ssc, dcParam, timestampStorage, streamingCacheClientOpt)
         .asInstanceOf[StreamingDataConnector]
     } else {
-      throw new ClassCastException(s"$className should extend BatchDataConnector or StreamingDataConnector")
+      throw new ClassCastException(
+        s"$className should extend BatchDataConnector or StreamingDataConnector")
     }
   }
 
-  private def getKafkaDataConnector(sparkSession: SparkSession,
-                                    ssc: StreamingContext,
-                                    dcParam: DataConnectorParam,
-                                    tmstCache: TimestampStorage,
-                                    streamingCacheClientOpt: Option[StreamingCacheClient]
-                                   ): KafkaStreamingDataConnector = {
+  private def getKafkaDataConnector(
+      sparkSession: SparkSession,
+      ssc: StreamingContext,
+      dcParam: DataConnectorParam,
+      tmstCache: TimestampStorage,
+      streamingCacheClientOpt: Option[StreamingCacheClient]): KafkaStreamingDataConnector = {
     val KeyType = "key.type"
     val ValueType = "value.type"
     val config = dcParam.getConfig
@@ -132,12 +141,14 @@
     (keyType, valueType) match {
       case ("java.lang.String", "java.lang.String") =>
         KafkaStreamingStringDataConnector(
-          sparkSession, ssc, dcParam, tmstCache, streamingCacheClientOpt)
+          sparkSession,
+          ssc,
+          dcParam,
+          tmstCache,
+          streamingCacheClientOpt)
       case _ =>
         throw new Exception("not supported type kafka data connector")
     }
   }
 
-
-
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/AvroBatchDataConnector.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/AvroBatchDataConnector.scala
index 893a72f..dcedf48 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/AvroBatchDataConnector.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/AvroBatchDataConnector.scala
@@ -26,12 +26,13 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * batch data connector for avro file
-  */
-case class AvroBatchDataConnector(@transient sparkSession: SparkSession,
-                                  dcParam: DataConnectorParam,
-                                  timestampStorage: TimestampStorage
-                                 ) extends BatchDataConnector {
+ * batch data connector for avro file
+ */
+case class AvroBatchDataConnector(
+    @transient sparkSession: SparkSession,
+    dcParam: DataConnectorParam,
+    timestampStorage: TimestampStorage)
+    extends BatchDataConnector {
 
   val config: Map[String, Any] = dcParam.getConfig
 
@@ -63,5 +64,4 @@
     (dfOpt, TimeRange(ms, tmsts))
   }
 
-
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/CassandraDataConnector.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/CassandraDataConnector.scala
index 8667b11..d135f3b 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/CassandraDataConnector.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/CassandraDataConnector.scala
@@ -24,9 +24,11 @@
 import org.apache.griffin.measure.datasource.TimestampStorage
 import org.apache.griffin.measure.utils.ParamUtil._
 
-case class CassandraDataConnector(@transient sparkSession: SparkSession,
-                                  dcParam: DataConnectorParam,
-                                  timestampStorage: TimestampStorage) extends BatchDataConnector {
+case class CassandraDataConnector(
+    @transient sparkSession: SparkSession,
+    dcParam: DataConnectorParam,
+    timestampStorage: TimestampStorage)
+    extends BatchDataConnector {
 
   val config: Map[String, Any] = dcParam.getConfig
 
@@ -56,7 +58,6 @@
       sparkSession.conf.set("spark.cassandra.auth.username", user)
       sparkSession.conf.set("spark.cassandra.auth.password", password)
 
-
       val tableDef: DataFrameReader = sparkSession.read
         .format("org.apache.spark.sql.cassandra")
         .options(Map("table" -> tableName, "keyspace" -> database))
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/ElasticSearchGriffinDataConnector.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/ElasticSearchGriffinDataConnector.scala
index 186e625..b70ba51 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/ElasticSearchGriffinDataConnector.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/ElasticSearchGriffinDataConnector.scala
@@ -37,10 +37,11 @@
 import org.apache.griffin.measure.datasource.TimestampStorage
 import org.apache.griffin.measure.utils.ParamUtil._
 
-
-case class ElasticSearchGriffinDataConnector(@transient sparkSession: SparkSession,
-                                             dcParam: DataConnectorParam,
-                                             timestampStorage: TimestampStorage) extends BatchDataConnector {
+case class ElasticSearchGriffinDataConnector(
+    @transient sparkSession: SparkSession,
+    dcParam: DataConnectorParam,
+    timestampStorage: TimestampStorage)
+    extends BatchDataConnector {
 
   lazy val getBaseUrl = s"http://$host:$port"
   val config: scala.Predef.Map[scala.Predef.String, scala.Any] = dcParam.getConfig
@@ -71,7 +72,8 @@
       val data = ArrayBuffer[Map[String, Number]]()
 
       if (answer._1) {
-        val arrayAnswers: util.Iterator[JsonNode] = parseString(answer._2).get("hits").get("hits").elements()
+        val arrayAnswers: util.Iterator[JsonNode] =
+          parseString(answer._2).get("hits").get("hits").elements()
 
         while (arrayAnswers.hasNext) {
           val answer = arrayAnswers.next()
@@ -90,8 +92,7 @@
       val defaultNumber: Number = 0.0
       val rdd: RDD[Row] = rdd1
         .map { x: Map[String, Number] =>
-          Row(
-            columns.map(c => x.getOrElse(c, defaultNumber).doubleValue()): _*)
+          Row(columns.map(c => x.getOrElse(c, defaultNumber).doubleValue()): _*)
         }
       val schema = dfSchema(columns.toList)
       val df: DataFrame = sparkSession.createDataFrame(rdd, schema).limit(size)
@@ -126,15 +127,15 @@
   def parseString(data: String): JsonNode = {
     val mapper = new ObjectMapper()
     mapper.registerModule(DefaultScalaModule)
-    val reader = new BufferedReader(new InputStreamReader(new ByteArrayInputStream(data.getBytes)))
+    val reader = new BufferedReader(
+      new InputStreamReader(new ByteArrayInputStream(data.getBytes)))
     mapper.readTree(reader)
   }
 
   def dfSchema(columnNames: List[String]): StructType = {
     val a: Seq[StructField] = columnNames
-      .map(x => StructField(name = x,
-        dataType = org.apache.spark.sql.types.DoubleType,
-        nullable = false))
+      .map(x =>
+        StructField(name = x, dataType = org.apache.spark.sql.types.DoubleType, nullable = false))
     StructType(a)
   }
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/FileBasedDataConnector.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/FileBasedDataConnector.scala
index 4b9bfd7..086596b 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/FileBasedDataConnector.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/FileBasedDataConnector.scala
@@ -52,19 +52,22 @@
  *  - `header` is false,
  *  - `format` is parquet
  */
-case class FileBasedDataConnector(@transient sparkSession: SparkSession,
-                                  dcParam: DataConnectorParam,
-                                  timestampStorage: TimestampStorage)
-  extends BatchDataConnector {
+case class FileBasedDataConnector(
+    @transient sparkSession: SparkSession,
+    dcParam: DataConnectorParam,
+    timestampStorage: TimestampStorage)
+    extends BatchDataConnector {
 
   import FileBasedDataConnector._
 
   val config: Map[String, Any] = dcParam.getConfig
-  var options: MutableMap[String, String] = MutableMap(config.getParamStringMap(Options, Map.empty).toSeq: _*)
+  val options: MutableMap[String, String] = MutableMap(
+    config.getParamStringMap(Options, Map.empty).toSeq: _*)
 
   var format: String = config.getString(Format, DefaultFormat).toLowerCase
   val paths: Seq[String] = config.getStringArr(Paths, Nil)
-  val schemaSeq: Seq[Map[String, String]] = config.getAnyRef[Seq[Map[String, String]]](Schema, Nil)
+  val schemaSeq: Seq[Map[String, String]] =
+    config.getAnyRef[Seq[Map[String, String]]](Schema, Nil)
   val skipErrorPaths: Boolean = config.getBoolean(SkipErrorPaths, defValue = false)
 
   val currentSchema: Option[StructType] = Try(getUserDefinedSchema) match {
@@ -72,7 +75,8 @@
     case _ => None
   }
 
-  assert(SupportedFormats.contains(format),
+  assert(
+    SupportedFormats.contains(format),
     s"Invalid format '$format' specified. Must be one of ${SupportedFormats.mkString("['", "', '", "']")}")
 
   if (format == "csv") validateCSVOptions()
@@ -110,11 +114,13 @@
    */
   private def validateCSVOptions(): Unit = {
     if (options.contains(Header) && config.contains(Schema)) {
-      griffinLogger.warn(s"Both $Options.$Header and $Schema were provided. Defaulting to provided $Schema")
+      griffinLogger.warn(
+        s"Both $Options.$Header and $Schema were provided. Defaulting to provided $Schema")
     }
 
     if (!options.contains(Header) && !config.contains(Schema)) {
-      throw new IllegalArgumentException(s"Either '$Header' must be set in '$Options' or '$Schema' must be set.")
+      throw new IllegalArgumentException(
+        s"Either '$Header' must be set in '$Options' or '$Schema' must be set.")
     }
 
     if (config.contains(Schema) && (schemaSeq.isEmpty || currentSchema.isEmpty)) {
@@ -132,9 +138,7 @@
           .options(options)
           .format(format)
           .withSchemaIfAny(currentSchema)
-          .load(validPaths: _*)
-
-      )
+          .load(validPaths: _*))
       val preDfOpt = preProcess(dfOpt, ms)
       preDfOpt
     }
@@ -169,16 +173,16 @@
    * @return
    */
   private def getValidPaths(paths: Seq[String], skipOnError: Boolean): Seq[String] = {
-    val validPaths = paths.filter(path =>
-      if (HdfsUtil.existPath(path)) true
-      else {
-        val msg = s"Path '$path' does not exist!"
-        if (skipOnError) griffinLogger.error(msg)
-        else throw new IllegalArgumentException(msg)
+    val validPaths = paths.filter(
+      path =>
+        if (HdfsUtil.existPath(path)) true
+        else {
+          val msg = s"Path '$path' does not exist!"
+          if (skipOnError) griffinLogger.error(msg)
+          else throw new IllegalArgumentException(msg)
 
-        false
-      }
-    )
+          false
+      })
 
     assert(validPaths.nonEmpty, "No paths were given for the data source.")
     validPaths
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/HiveBatchDataConnector.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/HiveBatchDataConnector.scala
index df2c850..e9c27aa 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/HiveBatchDataConnector.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/HiveBatchDataConnector.scala
@@ -25,29 +25,30 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * batch data connector for hive table
-  */
-case class HiveBatchDataConnector(@transient sparkSession: SparkSession,
-                                  dcParam: DataConnectorParam,
-                                  timestampStorage: TimestampStorage
-                                 ) extends BatchDataConnector {
+ * batch data connector for hive table
+ */
+case class HiveBatchDataConnector(
+    @transient sparkSession: SparkSession,
+    dcParam: DataConnectorParam,
+    timestampStorage: TimestampStorage)
+    extends BatchDataConnector {
 
-  val config = dcParam.getConfig
+  val config: Map[String, Any] = dcParam.getConfig
 
   val Database = "database"
   val TableName = "table.name"
   val Where = "where"
 
-  val database = config.getString(Database, "default")
-  val tableName = config.getString(TableName, "")
-  val whereString = config.getString(Where, "")
+  val database: String = config.getString(Database, "default")
+  val tableName: String = config.getString(TableName, "")
+  val whereString: String = config.getString(Where, "")
 
-  val concreteTableName = s"${database}.${tableName}"
-  val wheres = whereString.split(",").map(_.trim).filter(_.nonEmpty)
+  val concreteTableName = s"$database.$tableName"
+  val wheres: Array[String] = whereString.split(",").map(_.trim).filter(_.nonEmpty)
 
   def data(ms: Long): (Option[DataFrame], TimeRange) = {
     val dfOpt = {
-      val dtSql = dataSql
+      val dtSql = dataSql()
       info(dtSql)
       val df = sparkSession.sql(dtSql)
       val dfOpt = Some(df)
@@ -58,21 +59,11 @@
     (dfOpt, TimeRange(ms, tmsts))
   }
 
-
-  private def tableExistsSql(): String = {
-//    s"SHOW TABLES LIKE '${concreteTableName}'"    // this is hive sql, but not work for spark sql
-    s"tableName LIKE '${tableName}'"
-  }
-
-  private def metaDataSql(): String = {
-    s"DESCRIBE ${concreteTableName}"
-  }
-
   private def dataSql(): String = {
-    val tableClause = s"SELECT * FROM ${concreteTableName}"
+    val tableClause = s"SELECT * FROM $concreteTableName"
     if (wheres.length > 0) {
       val clauses = wheres.map { w =>
-        s"${tableClause} WHERE ${w}"
+        s"$tableClause WHERE $w"
       }
       clauses.mkString(" UNION ALL ")
     } else tableClause
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/MySqlDataConnector.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/MySqlDataConnector.scala
index 6c4b2a8..05d5d87 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/MySqlDataConnector.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/MySqlDataConnector.scala
@@ -24,9 +24,11 @@
 import org.apache.griffin.measure.datasource.TimestampStorage
 import org.apache.griffin.measure.utils.ParamUtil._
 
-case class MySqlDataConnector(@transient sparkSession: SparkSession,
-                              dcParam: DataConnectorParam,
-                              timestampStorage: TimestampStorage) extends BatchDataConnector {
+case class MySqlDataConnector(
+    @transient sparkSession: SparkSession,
+    dcParam: DataConnectorParam,
+    timestampStorage: TimestampStorage)
+    extends BatchDataConnector {
 
   val Database = "database"
   val TableName = "table.name"
@@ -47,7 +49,6 @@
 
   override def data(ms: Long): (Option[DataFrame], TimeRange) = {
 
-
     val dfOpt = try {
       val dtSql = dataSql()
       val prop = new java.util.Properties
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/TextDirBatchDataConnector.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/TextDirBatchDataConnector.scala
index 0e2eb31..35bcaa3 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/TextDirBatchDataConnector.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/batch/TextDirBatchDataConnector.scala
@@ -26,24 +26,25 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * batch data connector for directory with text format data in the nth depth sub-directories
-  */
-case class TextDirBatchDataConnector(@transient sparkSession: SparkSession,
-                                     dcParam: DataConnectorParam,
-                                     timestampStorage: TimestampStorage
-                                    ) extends BatchDataConnector {
+ * batch data connector for a directory with text-format data in nth-depth sub-directories
+ */
+case class TextDirBatchDataConnector(
+    @transient sparkSession: SparkSession,
+    dcParam: DataConnectorParam,
+    timestampStorage: TimestampStorage)
+    extends BatchDataConnector {
 
-  val config = dcParam.getConfig
+  val config: Map[String, Any] = dcParam.getConfig
 
   val DirPath = "dir.path"
   val DataDirDepth = "data.dir.depth"
   val SuccessFile = "success.file"
   val DoneFile = "done.file"
 
-  val dirPath = config.getString(DirPath, "")
-  val dataDirDepth = config.getInt(DataDirDepth, 0)
-  val successFile = config.getString(SuccessFile, "_SUCCESS")
-  val doneFile = config.getString(DoneFile, "_DONE")
+  val dirPath: String = config.getString(DirPath, "")
+  val dataDirDepth: Int = config.getInt(DataDirDepth, 0)
+  val successFile: String = config.getString(SuccessFile, "_SUCCESS")
+  val doneFile: String = config.getString(DoneFile, "_DONE")
 
   val ignoreFilePrefix = "_"
 
@@ -52,7 +53,7 @@
   }
 
   def data(ms: Long): (Option[DataFrame], TimeRange) = {
-    assert(dirExist(), s"Text dir ${dirPath} is not exists!")
+    assert(dirExist(), s"Text dir $dirPath is not exists!")
     val dfOpt = {
       val dataDirs = listSubDirs(dirPath :: Nil, dataDirDepth, readable)
       // touch done file for read dirs
@@ -73,10 +74,14 @@
     (dfOpt, TimeRange(ms, tmsts))
   }
 
-  private def listSubDirs(paths: Seq[String],
-                          depth: Int,
-                          filteFunc: (String) => Boolean): Seq[String] = {
-    val subDirs = paths.flatMap { path => HdfsUtil.listSubPathsByType(path, "dir", true) }
+  @scala.annotation.tailrec
+  private def listSubDirs(
+      paths: Seq[String],
+      depth: Int,
+      filteFunc: String => Boolean): Seq[String] = {
+    val subDirs = paths.flatMap { path =>
+      HdfsUtil.listSubPathsByType(path, "dir", fullPath = true)
+    }
     if (depth <= 0) {
       subDirs.filter(filteFunc)
     } else {
@@ -92,7 +97,7 @@
     HdfsUtil.createEmptyFile(HdfsUtil.getHdfsFilePath(dir, doneFile))
 
   private def emptyDir(dir: String): Boolean = {
-    HdfsUtil.listSubPathsByType(dir, "file").filter(!_.startsWith(ignoreFilePrefix)).size == 0
+    HdfsUtil.listSubPathsByType(dir, "file").forall(_.startsWith(ignoreFilePrefix))
   }
 
 //  def metaData(): Try[Iterable[(String, String)]] = {
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/streaming/KafkaStreamingDataConnector.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/streaming/KafkaStreamingDataConnector.scala
index bcfba56..0c501dc 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/streaming/KafkaStreamingDataConnector.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/streaming/KafkaStreamingDataConnector.scala
@@ -25,27 +25,27 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * streaming data connector for kafka
-  */
+ * streaming data connector for kafka
+ */
 trait KafkaStreamingDataConnector extends StreamingDataConnector {
 
   type KD <: Decoder[K]
   type VD <: Decoder[V]
   type OUT = (K, V)
 
-  val config = dcParam.getConfig
+  val config: Map[String, Any] = dcParam.getConfig
 
   val KafkaConfig = "kafka.config"
   val Topics = "topics"
 
-  val kafkaConfig = config.getAnyRef(KafkaConfig, Map[String, String]())
-  val topics = config.getString(Topics, "")
+  val kafkaConfig: Map[String, String] = config.getAnyRef(KafkaConfig, Map[String, String]())
+  val topics: String = config.getString(Topics, "")
 
   def init(): Unit = {
     // register fan in
-    streamingCacheClientOpt.foreach(_.registerFanIn)
+    streamingCacheClientOpt.foreach(_.registerFanIn())
 
-    val ds = stream match {
+    val ds = stream() match {
       case Success(dstream) => dstream
       case Failure(ex) => throw ex
     }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/streaming/KafkaStreamingStringDataConnector.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/streaming/KafkaStreamingStringDataConnector.scala
index 1623efb..94501cb 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/streaming/KafkaStreamingStringDataConnector.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/streaming/KafkaStreamingStringDataConnector.scala
@@ -30,14 +30,15 @@
 import org.apache.griffin.measure.datasource.cache.StreamingCacheClient
 
 /**
-  * streaming data connector for kafka with string format key and value
-  */
-case class KafkaStreamingStringDataConnector(@transient sparkSession: SparkSession,
-                                             @transient ssc: StreamingContext,
-                                             dcParam: DataConnectorParam,
-                                             timestampStorage: TimestampStorage,
-                                             streamingCacheClientOpt: Option[StreamingCacheClient]
-                                            ) extends KafkaStreamingDataConnector {
+ * streaming data connector for kafka with string format key and value
+ */
+case class KafkaStreamingStringDataConnector(
+    @transient sparkSession: SparkSession,
+    @transient ssc: StreamingContext,
+    dcParam: DataConnectorParam,
+    timestampStorage: TimestampStorage,
+    streamingCacheClientOpt: Option[StreamingCacheClient])
+    extends KafkaStreamingDataConnector {
 
   type K = String
   type KD = StringDecoder
@@ -45,22 +46,21 @@
   type VD = StringDecoder
 
   val valueColName = "value"
-  val schema = StructType(Array(
-    StructField(valueColName, StringType)
-  ))
+  val schema: StructType = StructType(Array(StructField(valueColName, StringType)))
 
   def createDStream(topicSet: Set[String]): InputDStream[OUT] = {
     KafkaUtils.createDirectStream[K, V, KD, VD](ssc, kafkaConfig, topicSet)
   }
 
   def transform(rdd: RDD[OUT]): Option[DataFrame] = {
-    if (rdd.isEmpty) None else {
+    if (rdd.isEmpty) None
+    else {
       try {
         val rowRdd = rdd.map(d => Row(d._2))
         val df = sparkSession.createDataFrame(rowRdd, schema)
         Some(df)
       } catch {
-        case e: Throwable =>
+        case _: Throwable =>
           error("streaming data transform fails")
           None
       }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/streaming/StreamingDataConnector.scala b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/streaming/StreamingDataConnector.scala
index a1d4a64..67d1bb9 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/streaming/StreamingDataConnector.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/datasource/connector/streaming/StreamingDataConnector.scala
@@ -27,7 +27,6 @@
 import org.apache.griffin.measure.datasource.cache.StreamingCacheClient
 import org.apache.griffin.measure.datasource.connector.DataConnector
 
-
 trait StreamingDataConnector extends DataConnector {
 
   type K
diff --git a/measure/src/main/scala/org/apache/griffin/measure/job/DQJob.scala b/measure/src/main/scala/org/apache/griffin/measure/job/DQJob.scala
index c2a71ca..4b19cd6 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/job/DQJob.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/job/DQJob.scala
@@ -23,12 +23,10 @@
 case class DQJob(dqSteps: Seq[DQStep]) extends Serializable {
 
   /**
-    * @return execution success
-    */
+   * @return execution success
+   */
   def execute(context: DQContext): Boolean = {
-    dqSteps.foldLeft(true) { (ret, dqStep) =>
-      ret && dqStep.execute(context)
-    }
+    dqSteps.forall(dqStep => dqStep.execute(context))
   }
 
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/job/builder/DQJobBuilder.scala b/measure/src/main/scala/org/apache/griffin/measure/job/builder/DQJobBuilder.scala
index 5176246..9bf0937 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/job/builder/DQJobBuilder.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/job/builder/DQJobBuilder.scala
@@ -24,27 +24,27 @@
 import org.apache.griffin.measure.step.write.MetricFlushStep
 
 /**
-  * build dq job based on configuration
-  */
+ * build dq job based on configuration
+ */
 object DQJobBuilder {
 
   /**
-    * build dq job with rule param
-    * @param context              dq context
-    * @param evaluateRuleParam    evaluate rule param
-    * @return       dq job
-    */
+   * build dq job with rule param
+   * @param context              dq context
+   * @param evaluateRuleParam    evaluate rule param
+   * @return       dq job
+   */
   def buildDQJob(context: DQContext, evaluateRuleParam: EvaluateRuleParam): DQJob = {
     val ruleParams = evaluateRuleParam.getRules
     buildDQJob(context, ruleParams)
   }
 
   /**
-    * build dq job with rules in evaluate rule param or pre-proc param
-    * @param context          dq context
-    * @param ruleParams       rule params
-    * @return       dq job
-    */
+   * build dq job with rules in evaluate rule param or pre-proc param
+   * @param context          dq context
+   * @param ruleParams       rule params
+   * @return       dq job
+   */
   def buildDQJob(context: DQContext, ruleParams: Seq[RuleParam]): DQJob = {
     // build steps by datasources
     val dsSteps = context.dataSources.flatMap { dataSource =>
diff --git a/measure/src/main/scala/org/apache/griffin/measure/launch/DQApp.scala b/measure/src/main/scala/org/apache/griffin/measure/launch/DQApp.scala
index 75e0631..bc358fa 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/launch/DQApp.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/launch/DQApp.scala
@@ -25,8 +25,8 @@
 import org.apache.griffin.measure.configuration.dqdefinition.{DQConfig, EnvConfig, SinkParam}
 
 /**
-  * dq application process
-  */
+ * dq application process
+ */
 trait DQApp extends Loggable with Serializable {
 
   val envParam: EnvConfig
@@ -37,22 +37,22 @@
   def init: Try[_]
 
   /**
-    * @return execution success
-    */
+   * @return execution success
+   */
   def run: Try[Boolean]
 
   def close: Try[_]
 
   /**
-    * application will exit if it fails in run phase.
-    * if retryable is true, the exception will be threw to spark env,
-    * and enable retry strategy of spark application
-    */
+   * application will exit if it fails in the run phase.
+   * if retryable is true, the exception will be thrown to the spark env,
+   * enabling the retry strategy of the spark application
+   */
   def retryable: Boolean
 
   /**
-    * timestamp as a key for metrics
-    */
+   * timestamp as a key for metrics
+   */
   protected def getMeasureTime: Long = {
     dqParam.getTimestampOpt match {
       case Some(t) if t > 0 => t
diff --git a/measure/src/main/scala/org/apache/griffin/measure/launch/batch/BatchDQApp.scala b/measure/src/main/scala/org/apache/griffin/measure/launch/batch/BatchDQApp.scala
index 3c42094..b23dd93 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/launch/batch/BatchDQApp.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/launch/batch/BatchDQApp.scala
@@ -32,15 +32,14 @@
 import org.apache.griffin.measure.launch.DQApp
 import org.apache.griffin.measure.step.builder.udf.GriffinUDFAgent
 
-
 case class BatchDQApp(allParam: GriffinConfig) extends DQApp {
 
   val envParam: EnvConfig = allParam.getEnvConfig
   val dqParam: DQConfig = allParam.getDqConfig
 
-  val sparkParam = envParam.getSparkParam
-  val metricName = dqParam.getName
-  val sinkParams = getSinkParams
+  val sparkParam: SparkParam = envParam.getSparkParam
+  val metricName: String = dqParam.getName
+  val sinkParams: Seq[SinkParam] = getSinkParams
 
   var dqContext: DQContext = _
 
@@ -52,7 +51,7 @@
     conf.setAll(sparkParam.getConfig)
     conf.set("spark.sql.crossJoin.enabled", "true")
     sparkSession = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
-    val logLevel = getGriffinLogLevel()
+    val logLevel = getGriffinLogLevel
     sparkSession.sparkContext.setLogLevel(sparkParam.getLogLevel)
     griffinLogger.setLevel(logLevel)
 
@@ -69,16 +68,15 @@
 
     // get data sources
     val dataSources = DataSourceFactory.getDataSources(sparkSession, null, dqParam.getDataSources)
-    dataSources.foreach(_.init)
+    dataSources.foreach(_.init())
 
     // create dq context
-    dqContext = DQContext(
-      contextId, metricName, dataSources, sinkParams, BatchProcessType
-    )(sparkSession)
+    dqContext =
+      DQContext(contextId, metricName, dataSources, sinkParams, BatchProcessType)(sparkSession)
 
     // start id
     val applicationId = sparkSession.sparkContext.applicationId
-    dqContext.getSink().start(applicationId)
+    dqContext.getSink.start(applicationId)
 
     // build job
     val dqJob = DQJobBuilder.buildDQJob(dqContext, dqParam.getEvaluateRule)
@@ -88,13 +86,13 @@
 
     // end time
     val endTime = new Date().getTime
-    dqContext.getSink().log(endTime, s"process using time: ${endTime - startTime} ms")
+    dqContext.getSink.log(endTime, s"process using time: ${endTime - startTime} ms")
 
     // clean context
     dqContext.clean()
 
     // finish
-    dqContext.getSink().finish()
+    dqContext.getSink.finish()
 
     result
   }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/launch/streaming/StreamingDQApp.scala b/measure/src/main/scala/org/apache/griffin/measure/launch/streaming/StreamingDQApp.scala
index 44dca49..f91a003 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/launch/streaming/StreamingDQApp.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/launch/streaming/StreamingDQApp.scala
@@ -30,12 +30,14 @@
 import org.apache.griffin.measure.configuration.dqdefinition._
 import org.apache.griffin.measure.configuration.enums.ProcessType.StreamingProcessType
 import org.apache.griffin.measure.context._
+import org.apache.griffin.measure.context.streaming.checkpoint.lock.CheckpointLock
 import org.apache.griffin.measure.context.streaming.checkpoint.offset.OffsetCheckpointClient
 import org.apache.griffin.measure.context.streaming.metric.CacheResults
 import org.apache.griffin.measure.datasource.DataSourceFactory
 import org.apache.griffin.measure.job.DQJob
 import org.apache.griffin.measure.job.builder.DQJobBuilder
 import org.apache.griffin.measure.launch.DQApp
+import org.apache.griffin.measure.sink.Sink
 import org.apache.griffin.measure.step.builder.udf.GriffinUDFAgent
 import org.apache.griffin.measure.utils.{HdfsUtil, TimeUtil}
 
@@ -44,9 +46,9 @@
   val envParam: EnvConfig = allParam.getEnvConfig
   val dqParam: DQConfig = allParam.getDqConfig
 
-  val sparkParam = envParam.getSparkParam
-  val metricName = dqParam.getName
-  val sinkParams = getSinkParams
+  val sparkParam: SparkParam = envParam.getSparkParam
+  val metricName: String = dqParam.getName
+  val sinkParams: Seq[SinkParam] = getSinkParams
 
   def retryable: Boolean = true
 
@@ -56,16 +58,16 @@
     conf.setAll(sparkParam.getConfig)
     conf.set("spark.sql.crossJoin.enabled", "true")
     sparkSession = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
-    val logLevel = getGriffinLogLevel()
+    val logLevel = getGriffinLogLevel
     sparkSession.sparkContext.setLogLevel(sparkParam.getLogLevel)
     griffinLogger.setLevel(logLevel)
 
     // clear checkpoint directory
-    clearCpDir
+    clearCpDir()
 
     // init info cache instance
     OffsetCheckpointClient.initClient(envParam.getCheckpointParams, metricName)
-    OffsetCheckpointClient.init
+    OffsetCheckpointClient.init()
 
     // register udf
     GriffinUDFAgent.register(sparkSession)
@@ -90,16 +92,16 @@
 
     // generate data sources
     val dataSources = DataSourceFactory.getDataSources(sparkSession, ssc, dqParam.getDataSources)
-    dataSources.foreach(_.init)
+    dataSources.foreach(_.init())
 
     // create dq context
-    val globalContext: DQContext = DQContext(
-      contextId, metricName, dataSources, sinkParams, StreamingProcessType
-    )(sparkSession)
+    val globalContext: DQContext =
+      DQContext(contextId, metricName, dataSources, sinkParams, StreamingProcessType)(
+        sparkSession)
 
     // start id
     val applicationId = sparkSession.sparkContext.applicationId
-    globalContext.getSink().start(applicationId)
+    globalContext.getSink.start(applicationId)
 
     // process thread
     val dqCalculator = StreamingDQCalculator(globalContext, dqParam.getEvaluateRule)
@@ -119,7 +121,7 @@
     globalContext.clean()
 
     // finish
-    globalContext.getSink().finish()
+    globalContext.getSink.finish()
 
     true
   }
@@ -129,7 +131,6 @@
     sparkSession.stop()
   }
 
-
   def createStreamingContext: StreamingContext = {
     val batchInterval = TimeUtil.milliseconds(sparkParam.getBatchInterval) match {
       case Some(interval) => Milliseconds(interval)
@@ -141,26 +142,25 @@
     ssc
   }
 
-  private def clearCpDir: Unit = {
+  private def clearCpDir(): Unit = {
     if (sparkParam.needInitClear) {
       val cpDir = sparkParam.getCpDir
-      info(s"clear checkpoint directory ${cpDir}")
+      info(s"clear checkpoint directory $cpDir")
       HdfsUtil.deleteHdfsPath(cpDir)
     }
   }
 
-
   /**
-    *
-    * @param globalContext
-    * @param evaluateRuleParam
-    */
-  case class StreamingDQCalculator(globalContext: DQContext,
-                                   evaluateRuleParam: EvaluateRuleParam
-                                  ) extends Runnable with Loggable {
+   *
+   * @param globalContext
+   * @param evaluateRuleParam
+   */
+  case class StreamingDQCalculator(globalContext: DQContext, evaluateRuleParam: EvaluateRuleParam)
+      extends Runnable
+      with Loggable {
 
-    val lock = OffsetCheckpointClient.genLock("process")
-    val appSink = globalContext.getSink()
+    val lock: CheckpointLock = OffsetCheckpointClient.genLock("process")
+    val appSink: Sink = globalContext.getSink
 
     var dqContext: DQContext = _
     var dqJob: DQJob = _
@@ -168,12 +168,12 @@
     def run(): Unit = {
       val updateTimeDate = new Date()
       val updateTime = updateTimeDate.getTime
-      println(s"===== [${updateTimeDate}] process begins =====")
+      println(s"===== [$updateTimeDate] process begins =====")
       val locked = lock.lock(5, TimeUnit.SECONDS)
       if (locked) {
         try {
 
-          OffsetCheckpointClient.startOffsetCheckpoint
+          OffsetCheckpointClient.startOffsetCheckpoint()
 
           val startTime = new Date().getTime
           appSink.log(startTime, "starting process ...")
@@ -195,7 +195,7 @@
           val endTime = new Date().getTime
           appSink.log(endTime, s"process using time: ${endTime - startTime} ms")
 
-          OffsetCheckpointClient.endOffsetCheckpoint
+          OffsetCheckpointClient.endOffsetCheckpoint()
 
           // clean old data
           cleanData(dqContext)
@@ -206,21 +206,21 @@
           lock.unlock()
         }
       } else {
-        println(s"===== [${updateTimeDate}] process ignores =====")
+        println(s"===== [$updateTimeDate] process ignores =====")
       }
       val endTime = new Date().getTime
-      println(s"===== [${updateTimeDate}] process ends, using ${endTime - updateTime} ms =====")
+      println(s"===== [$updateTimeDate] process ends, using ${endTime - updateTime} ms =====")
     }
 
     // finish calculation for this round
     private def finishCalculation(context: DQContext): Unit = {
-      context.dataSources.foreach(_.processFinish)
+      context.dataSources.foreach(_.processFinish())
     }
 
     // clean old data and old result cache
     private def cleanData(context: DQContext): Unit = {
       try {
-        context.dataSources.foreach(_.cleanOldData)
+        context.dataSources.foreach(_.cleanOldData())
 
         context.clean()
 
@@ -233,19 +233,19 @@
 
   }
 
-
   /**
-    *
-    * @param interval
-    * @param runnable
-    */
+   *
+   * @param interval
+   * @param runnable
+   */
   case class Scheduler(interval: Long, runnable: Runnable) {
 
-    val pool: ThreadPoolExecutor = Executors.newFixedThreadPool(5).asInstanceOf[ThreadPoolExecutor]
+    val pool: ThreadPoolExecutor =
+      Executors.newFixedThreadPool(5).asInstanceOf[ThreadPoolExecutor]
 
     val timer = new Timer("process", true)
 
-    val timerTask = new TimerTask() {
+    val timerTask: TimerTask = new TimerTask() {
       override def run(): Unit = {
         pool.submit(runnable)
       }
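
Note: the `StreamingDQApp` hunks above apply one of the core conventions in this PR — methods that perform side effects are declared and invoked with empty parentheses (`clearCpDir()`, `OffsetCheckpointClient.init()`), while pure accessors drop them (`getSink`). A minimal standalone sketch of that rule, with illustrative names not taken from the Griffin codebase:

```scala
import scala.collection.mutable.ListBuffer

class Cache {
  private val entries = ListBuffer.empty[String]

  // Side-effecting operation: declared *and* called with ().
  def clear(): Unit = entries.clear()

  // Pure accessor: no parentheses at declaration or call site.
  def size: Int = entries.size
}

object ParenConvention {
  def main(args: Array[String]): Unit = {
    val cache = new Cache
    cache.clear()       // () signals "this mutates state"
    println(cache.size) // no () signals "this just reads"
  }
}
```
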
diff --git a/measure/src/main/scala/org/apache/griffin/measure/sink/ConsoleSink.scala b/measure/src/main/scala/org/apache/griffin/measure/sink/ConsoleSink.scala
index e3d878e..5bfa3e6 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/sink/ConsoleSink.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/sink/ConsoleSink.scala
@@ -23,30 +23,28 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * sink metric and record to console, for debug
-  */
-case class ConsoleSink(
-                        config: Map[String, Any],
-                        metricName: String,
-                        timeStamp: Long) extends Sink {
+ * sink metric and record to console, for debug
+ */
+case class ConsoleSink(config: Map[String, Any], metricName: String, timeStamp: Long)
+    extends Sink {
 
   val block: Boolean = true
 
   val MaxLogLines = "max.log.lines"
 
-  val maxLogLines = config.getInt(MaxLogLines, 100)
+  val maxLogLines: Int = config.getInt(MaxLogLines, 100)
 
   def available(): Boolean = true
 
   def start(msg: String): Unit = {
-    println(s"[${timeStamp}] ${metricName} start: ${msg}")
+    println(s"[$timeStamp] $metricName start: $msg")
   }
   def finish(): Unit = {
-    println(s"[${timeStamp}] ${metricName} finish")
+    println(s"[$timeStamp] $metricName finish")
   }
 
   def log(rt: Long, msg: String): Unit = {
-    println(s"[${timeStamp}] ${rt}: ${msg}")
+    println(s"[$timeStamp] $rt: $msg")
   }
 
   def sinkRecords(records: RDD[String], name: String): Unit = {
@@ -78,10 +76,9 @@
   }
 
   def sinkMetrics(metrics: Map[String, Any]): Unit = {
-    println(s"${metricName} [${timeStamp}] metrics: ")
+    println(s"$metricName [$timeStamp] metrics: ")
     val json = JsonUtil.toJson(metrics)
     println(json)
   }
 
-
 }
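
Note: the `ConsoleSink` changes show the interpolation rule used throughout the module — `${...}` braces are kept only when the interpolated part is an expression, and dropped for a bare identifier. Roughly, with hypothetical values:

```scala
object InterpolationStyle {
  def main(args: Array[String]): Unit = {
    val metricName = "accuracy"
    val timeStamp = 1546300800000L
    val start = 100L
    val end = 350L

    // Bare identifiers: no braces needed.
    println(s"[$timeStamp] $metricName start")

    // Expressions still need braces.
    println(s"process using time: ${end - start} ms")
  }
}
```
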
diff --git a/measure/src/main/scala/org/apache/griffin/measure/sink/ElasticSearchSink.scala b/measure/src/main/scala/org/apache/griffin/measure/sink/ElasticSearchSink.scala
index b3bc388..aac0969 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/sink/ElasticSearchSink.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/sink/ElasticSearchSink.scala
@@ -25,27 +25,27 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * sink metric and record through http request
-  */
+ * sink metric and record through http request
+ */
 case class ElasticSearchSink(
-                              config: Map[String, Any],
-                              metricName: String,
-                              timeStamp: Long,
-                              block: Boolean
-                            ) extends Sink {
+    config: Map[String, Any],
+    metricName: String,
+    timeStamp: Long,
+    block: Boolean)
+    extends Sink {
 
   val Api = "api"
   val Method = "method"
   val ConnectionTimeout = "connection.timeout"
   val Retry = "retry"
 
-  val api = config.getString(Api, "")
-  val method = config.getString(Method, "post")
+  val api: String = config.getString(Api, "")
+  val method: String = config.getString(Method, "post")
 
-  val connectionTimeout =
+  val connectionTimeout: Long =
     TimeUtil.milliseconds(config.getString(ConnectionTimeout, "")).getOrElse(-1L)
 
-  val retry = config.getInt(Retry, 10)
+  val retry: Int = config.getInt(Retry, 10)
 
   val _Value = "value"
 
@@ -56,7 +56,7 @@
   def start(msg: String): Unit = {}
   def finish(): Unit = {}
 
-  private def httpResult(dataMap: Map[String, Any]) = {
+  private def httpResult(dataMap: Map[String, Any]): Unit = {
     try {
       val data = JsonUtil.toJson(dataMap)
       // http request
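
Note: as in `ElasticSearchSink`, public `val`s and `def`s now carry explicit result types instead of relying on inference, which keeps the API surface readable and stable. A small sketch of the before/after shape — the config helpers below are stand-ins, not Griffin's `ParamUtil`:

```scala
object ExplicitTypes {
  private val config: Map[String, Any] =
    Map("retry" -> 3, "api" -> "http://example.com/metrics")

  // Old style relied on inference: val retry = ...
  // New style makes the type part of the public declaration.
  val retry: Int = config.get("retry").collect { case i: Int => i }.getOrElse(10)
  val api: String = config.get("api").collect { case s: String => s }.getOrElse("")

  def main(args: Array[String]): Unit =
    println(s"retry=$retry api=$api")
}
```
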
diff --git a/measure/src/main/scala/org/apache/griffin/measure/sink/HdfsSink.scala b/measure/src/main/scala/org/apache/griffin/measure/sink/HdfsSink.scala
index a3d59d8..590e2d4 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/sink/HdfsSink.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/sink/HdfsSink.scala
@@ -26,12 +26,9 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * sink metric and record to hdfs
-  */
-case class HdfsSink(
-                     config: Map[String, Any],
-                     metricName: String,
-                     timeStamp: Long) extends Sink {
+ * sink metric and record to hdfs
+ */
+case class HdfsSink(config: Map[String, Any], metricName: String, timeStamp: Long) extends Sink {
 
   val block: Boolean = true
 
@@ -39,15 +36,15 @@
   val MaxPersistLines = "max.persist.lines"
   val MaxLinesPerFile = "max.lines.per.file"
 
-  val parentPath = config.getOrElse(PathKey, "").toString
-  val maxPersistLines = config.getInt(MaxPersistLines, -1)
-  val maxLinesPerFile = math.min(config.getInt(MaxLinesPerFile, 10000), 1000000)
+  val parentPath: String = config.getOrElse(PathKey, "").toString
+  val maxPersistLines: Int = config.getInt(MaxPersistLines, -1)
+  val maxLinesPerFile: Int = math.min(config.getInt(MaxLinesPerFile, 10000), 1000000)
 
-  val StartFile = filePath("_START")
-  val FinishFile = filePath("_FINISH")
-  val MetricsFile = filePath("_METRICS")
+  val StartFile: String = filePath("_START")
+  val FinishFile: String = filePath("_FINISH")
+  val MetricsFile: String = filePath("_METRICS")
 
-  val LogFile = filePath("_LOG")
+  val LogFile: String = filePath("_LOG")
 
   var _init = true
 
@@ -59,25 +56,25 @@
     if (_init) {
       _init = false
       val dt = new Date(timeStamp)
-      s"================ log of ${dt} ================\n"
+      s"================ log of $dt ================\n"
     } else ""
   }
 
   private def timeHead(rt: Long): String = {
     val dt = new Date(rt)
-    s"--- ${dt} ---\n"
+    s"--- $dt ---\n"
   }
 
   private def logWrap(rt: Long, msg: String): String = {
-    logHead + timeHead(rt) + s"${msg}\n\n"
+    logHead + timeHead(rt) + s"$msg\n\n"
   }
 
   protected def filePath(file: String): String = {
-    HdfsUtil.getHdfsFilePath(parentPath, s"${metricName}/${timeStamp}/${file}")
+    HdfsUtil.getHdfsFilePath(parentPath, s"$metricName/$timeStamp/$file")
   }
 
   protected def withSuffix(path: String, suffix: String): String = {
-    s"${path}.${suffix}"
+    s"$path.$suffix"
   }
 
   def start(msg: String): Unit = {
@@ -108,11 +105,7 @@
   }
 
   private def getHdfsPath(path: String, groupId: Int): String = {
-    HdfsUtil.getHdfsFilePath(path, s"${groupId}")
-  }
-
-  private def getHdfsPath(path: String, ptnId: Int, groupId: Int): String = {
-    HdfsUtil.getHdfsFilePath(path, s"${ptnId}.${groupId}")
+    HdfsUtil.getHdfsFilePath(path, s"$groupId")
   }
 
   private def clearOldRecords(path: String): Unit = {
@@ -135,10 +128,12 @@
           sinkRecords2Hdfs(path, recs)
         } else {
           val groupedRecords: RDD[(Long, Iterable[String])] =
-            records.zipWithIndex.flatMap { r =>
-              val gid = r._2 / maxLinesPerFile
-              if (gid < groupCount) Some((gid, r._1)) else None
-            }.groupByKey()
+            records.zipWithIndex
+              .flatMap { r =>
+                val gid = r._2 / maxLinesPerFile
+                if (gid < groupCount) Some((gid, r._1)) else None
+              }
+              .groupByKey()
           groupedRecords.foreach { group =>
             val (gid, recs) = group
             val hdfsPath = if (gid == 0) path else withSuffix(path, gid.toString)
@@ -190,7 +185,7 @@
 
   private def sinkRecords2Hdfs(hdfsPath: String, records: Iterable[String]): Unit = {
     try {
-      HdfsUtil.withHdfsFile(hdfsPath, false) { out =>
+      HdfsUtil.withHdfsFile(hdfsPath, appendIfExists = false) { out =>
         records.map { record =>
           out.write((record + "\n").getBytes("utf-8"))
         }
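
Note: the `HdfsSink` hunk also replaces a bare boolean literal at a call site with a named argument (`appendIfExists = false`), a readability rule rather than a formatting one. Illustrative sketch with a hypothetical helper:

```scala
object NamedBooleanArgs {
  // Hypothetical writer: a bare 'false' at the call site says nothing about intent.
  def write(path: String, data: String, append: Boolean): Unit =
    println(s"write $data to $path (append=$append)")

  def main(args: Array[String]): Unit = {
    // Discouraged: write("/tmp/out", "record", false)
    // Preferred: the flag documents itself at the call site.
    write("/tmp/out", "record", append = false)
  }
}
```
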
diff --git a/measure/src/main/scala/org/apache/griffin/measure/sink/MongoSink.scala b/measure/src/main/scala/org/apache/griffin/measure/sink/MongoSink.scala
index 20effcd..59be39c 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/sink/MongoSink.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/sink/MongoSink.scala
@@ -27,23 +27,23 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 import org.apache.griffin.measure.utils.TimeUtil
 
-
 /**
-  * sink metric and record to mongo
-  */
+ * sink metric and record to mongo
+ */
 case class MongoSink(
-                      config: Map[String, Any],
-                      metricName: String,
-                      timeStamp: Long,
-                      block: Boolean) extends Sink {
+    config: Map[String, Any],
+    metricName: String,
+    timeStamp: Long,
+    block: Boolean)
+    extends Sink {
 
   MongoConnection.init(config)
 
   val OverTime = "over.time"
   val Retry = "retry"
 
-  val overTime = TimeUtil.milliseconds(config.getString(OverTime, "")).getOrElse(-1L)
-  val retry = config.getInt(Retry, 10)
+  val overTime: Long = TimeUtil.milliseconds(config.getString(OverTime, "")).getOrElse(-1L)
+  val retry: Int = config.getInt(Retry, 10)
 
   val _MetricName = "metricName"
   val _Timestamp = "timestamp"
@@ -63,17 +63,18 @@
     mongoInsert(metrics)
   }
 
-  private val filter = Filters.and(
-    Filters.eq(_MetricName, metricName),
-    Filters.eq(_Timestamp, timeStamp)
-  )
+  private val filter =
+    Filters.and(Filters.eq(_MetricName, metricName), Filters.eq(_Timestamp, timeStamp))
 
   private def mongoInsert(dataMap: Map[String, Any]): Unit = {
     try {
       val update = Updates.set(_Value, dataMap)
       def func(): (Long, Future[UpdateResult]) = {
-        (timeStamp, MongoConnection.getDataCollection.updateOne(
-          filter, update, UpdateOptions().upsert(true)).toFuture)
+        (
+          timeStamp,
+          MongoConnection.getDataCollection
+            .updateOne(filter, update, UpdateOptions().upsert(true))
+            .toFuture)
       }
       if (block) SinkTaskRunner.addBlockTask(func _, retry, overTime)
       else SinkTaskRunner.addNonBlockTask(func _, retry)
@@ -101,7 +102,7 @@
   var dataConf: MongoConf = _
   private var dataCollection: MongoCollection[Document] = _
 
-  def getDataCollection : MongoCollection[Document] = dataCollection
+  def getDataCollection: MongoCollection[Document] = dataCollection
 
   def init(config: Map[String, Any]): Unit = {
     if (!initialed) {
@@ -113,14 +114,12 @@
 
   private def mongoConf(cfg: Map[String, Any]): MongoConf = {
     val url = cfg.getString(Url, "").trim
-    val mongoUrl = if (url.startsWith(_MongoHead)) url else {
-      _MongoHead + url
-    }
-    MongoConf(
-      mongoUrl,
-      cfg.getString(Database, ""),
-      cfg.getString(Collection, "")
-    )
+    val mongoUrl =
+      if (url.startsWith(_MongoHead)) url
+      else {
+        _MongoHead + url
+      }
+    MongoConf(mongoUrl, cfg.getString(Database, ""), cfg.getString(Collection, ""))
   }
   private def mongoCollection(mongoConf: MongoConf): MongoCollection[Document] = {
     val mongoClient: MongoClient = MongoClient(mongoConf.url)
diff --git a/measure/src/main/scala/org/apache/griffin/measure/sink/MultiSinks.scala b/measure/src/main/scala/org/apache/griffin/measure/sink/MultiSinks.scala
index e8afe12..dd45c9c 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/sink/MultiSinks.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/sink/MultiSinks.scala
@@ -20,8 +20,8 @@
 import org.apache.spark.rdd.RDD
 
 /**
-  * sink metric and record in multiple ways
-  */
+ * sink metric and record in multiple ways
+ */
 case class MultiSinks(sinkIter: Iterable[Sink]) extends Sink {
 
   val block: Boolean = false
diff --git a/measure/src/main/scala/org/apache/griffin/measure/sink/Sink.scala b/measure/src/main/scala/org/apache/griffin/measure/sink/Sink.scala
index 247cc1c..6cb6f26 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/sink/Sink.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/sink/Sink.scala
@@ -22,8 +22,8 @@
 import org.apache.griffin.measure.Loggable
 
 /**
-  * sink metric and record
-  */
+ * sink metric and record
+ */
 trait Sink extends Loggable with Serializable {
   val metricName: String
   val timeStamp: Long
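
Note: the doc-comment rewrites across the sink classes follow the `docstrings = JavaDoc` setting — continuation asterisks get a single leading space so they line up under the first `*` of the opening delimiter, instead of the two-space Scaladoc indentation used before. For example (illustrative trait, not Griffin's `Sink`):

```scala
/**
 * Sink metric and record to console, for debug.
 *
 * Continuation lines use one leading space so every '*' aligns under the
 * second character of the opening delimiter (JavaDoc style).
 */
trait DebugSink {
  def log(msg: String): Unit
}

object StdOutSink extends DebugSink {
  def log(msg: String): Unit = println(msg)
}
```
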
diff --git a/measure/src/main/scala/org/apache/griffin/measure/sink/SinkContext.scala b/measure/src/main/scala/org/apache/griffin/measure/sink/SinkContext.scala
index a47135e..2120aaf 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/sink/SinkContext.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/sink/SinkContext.scala
@@ -17,4 +17,8 @@
 
 package org.apache.griffin.measure.sink
 
-case class SinkContext(config: Map[String, Any], metricName: String, timeStamp: Long, block: Boolean)
+case class SinkContext(
+    config: Map[String, Any],
+    metricName: String,
+    timeStamp: Long,
+    block: Boolean)
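
Note: `SinkContext` shows the scalafmt layout this PR standardizes on — when a signature no longer fits on one line, each parameter goes on its own line with a 4-space continuation indent, the closing parenthesis stays attached to the last parameter (`danglingParentheses = false`), and any `extends` clause moves to its own 4-space-indented line. A self-contained example in the same shape (names and types are placeholders):

```scala
trait DemoSink {
  def flush(): Unit
}

// One parameter per line, 4-space continuation indent, no dangling ')',
// 'extends' on the following line.
case class FileSink(
    config: Map[String, Any],
    metricName: String,
    timeStamp: Long,
    block: Boolean)
    extends DemoSink {

  def flush(): Unit =
    println(s"[$timeStamp] flushing $metricName (block=$block)")
}

object FileSinkDemo {
  def main(args: Array[String]): Unit =
    FileSink(Map.empty, "accuracy", 0L, block = true).flush()
}
```
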
diff --git a/measure/src/main/scala/org/apache/griffin/measure/sink/SinkFactory.scala b/measure/src/main/scala/org/apache/griffin/measure/sink/SinkFactory.scala
index e7806f4..3deff4d 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/sink/SinkFactory.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/sink/SinkFactory.scala
@@ -24,16 +24,17 @@
 import org.apache.griffin.measure.configuration.enums.SinkType._
 import org.apache.griffin.measure.utils.ParamUtil._
 
-case class SinkFactory(sinkParamIter: Iterable[SinkParam],
-                       metricName: String) extends Loggable with Serializable {
+case class SinkFactory(sinkParamIter: Iterable[SinkParam], metricName: String)
+    extends Loggable
+    with Serializable {
 
   /**
-    * create sink
-    *
-    * @param timeStamp the timestamp of sink
-    * @param block     sink write metric in block or non-block way
-    * @return sink
-    */
+   * create sink
+   *
+   * @param timeStamp the timestamp of sink
+   * @param block     sink write metric in block or non-block way
+   * @return sink
+   */
   def getSinks(timeStamp: Long, block: Boolean): MultiSinks = {
     MultiSinks(sinkParamIter.flatMap(param => getSink(timeStamp, param, block)))
   }
@@ -47,10 +48,10 @@
       case ElasticSearch => Try(ElasticSearchSink(config, metricName, timeStamp, block))
       case MongoDB => Try(MongoSink(config, metricName, timeStamp, block))
       case Custom => Try(getCustomSink(config, metricName, timeStamp, block))
-      case _ => throw new Exception(s"sink type ${sinkType} is not supported!")
+      case _ => throw new Exception(s"sink type $sinkType is not supported!")
     }
     sinkTry match {
-      case Success(sink) if (sink.available) => Some(sink)
+      case Success(sink) if sink.available() => Some(sink)
       case Failure(ex) =>
         error("Failed to get sink", ex)
         None
@@ -58,38 +59,42 @@
   }
 
   /**
-    * Using custom sink
-    *
-    * how it might look in env.json:
-    *
-    * "sinks": [
-    * {
-    * "type": "CUSTOM",
-    * "config": {
-    * "class": "com.yourcompany.griffin.sinks.MySuperSink",
-    * "path": "/Users/Shared"
-    * }
-    * },
-    *
-    */
-  private def getCustomSink(config: Map[String, Any],
-                            metricName: String,
-                            timeStamp: Long,
-                            block: Boolean): Sink = {
+   * Using custom sink
+   *
+   * how it might look in env.json:
+   *
+   * "sinks": [
+   * {
+   * "type": "CUSTOM",
+   * "config": {
+   * "class": "com.yourcompany.griffin.sinks.MySuperSink",
+   * "path": "/Users/Shared"
+   * }
+   * },
+   *
+   */
+  private def getCustomSink(
+      config: Map[String, Any],
+      metricName: String,
+      timeStamp: Long,
+      block: Boolean): Sink = {
     val className = config.getString("class", "")
     val cls = Class.forName(className)
     if (classOf[Sink].isAssignableFrom(cls)) {
-      val method = cls.getDeclaredMethod("apply",
+      val method = cls.getDeclaredMethod(
+        "apply",
         classOf[Map[String, Any]],
         classOf[String],
         classOf[Long],
         classOf[Boolean])
-      method.invoke(
-        null,
-        config,
-        metricName.asInstanceOf[Object],
-        timeStamp.asInstanceOf[Object],
-        block.asInstanceOf[Object]).asInstanceOf[Sink]
+      method
+        .invoke(
+          null,
+          config,
+          metricName.asInstanceOf[Object],
+          timeStamp.asInstanceOf[Object],
+          block.asInstanceOf[Object])
+        .asInstanceOf[Sink]
     } else {
       throw new ClassCastException(s"$className should extend Sink")
     }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/sink/SinkTaskRunner.scala b/measure/src/main/scala/org/apache/griffin/measure/sink/SinkTaskRunner.scala
index cc895c1..a847dd6 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/sink/SinkTaskRunner.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/sink/SinkTaskRunner.scala
@@ -26,10 +26,9 @@
 
 import org.apache.griffin.measure.Loggable
 
-
 /**
-  * sink task runner, to sink metrics in block or non-block mode
-  */
+ * sink task runner, to sink metrics in block or non-block mode
+ */
 object SinkTaskRunner extends Loggable {
 
   import scala.concurrent.ExecutionContext.Implicits.global
@@ -54,38 +53,41 @@
     res.onComplete {
       case Success(value) =>
         val et = new Date().getTime
-        info(s"task ${t} success with (${value}) [ using time ${et - st} ms ]")
+        info(s"task $t success with ($value) [ using time ${et - st} ms ]")
 
       case Failure(e) =>
         val et = new Date().getTime
-        warn(s"task ${t} fails [ using time ${et - st} ms ] : ${e.getMessage}")
+        warn(s"task $t fails [ using time ${et - st} ms ] : ${e.getMessage}")
         if (nextRetry >= 0) {
-          info(s"task ${t} retry [ rest retry count: ${nextRetry} ]")
+          info(s"task $t retry [ rest retry count: $nextRetry ]")
           nonBlockExecute(func, nextRetry)
         } else {
-          error(s"task fails: task ${t} retry ends but fails", e)
+          error(s"task fails: task $t retry ends but fails", e)
         }
     }
   }
 
-  private def blockExecute(func: () => (Long, Future[_]),
-                           retry: Int, waitDuration: Duration): Unit = {
+  @scala.annotation.tailrec
+  private def blockExecute(
+      func: () => (Long, Future[_]),
+      retry: Int,
+      waitDuration: Duration): Unit = {
     val nextRetry = nextRetryCount(retry)
     val st = new Date().getTime
     val (t, res) = func()
     try {
       val value = Await.result(res, waitDuration)
       val et = new Date().getTime
-      info(s"task ${t} success with (${value}) [ using time ${et - st} ms ]")
+      info(s"task $t success with ($value) [ using time ${et - st} ms ]")
     } catch {
       case e: Throwable =>
         val et = new Date().getTime
-        warn(s"task ${t} fails [ using time ${et - st} ms ] : ${e.getMessage}")
+        warn(s"task $t fails [ using time ${et - st} ms ] : ${e.getMessage}")
         if (nextRetry >= 0) {
-          info(s"task ${t} retry [ rest retry count: ${nextRetry} ]")
+          info(s"task $t retry [ rest retry count: $nextRetry ]")
           blockExecute(func, nextRetry, waitDuration)
         } else {
-          error(s"task fails: task ${t} retry ends but fails", e)
+          error(s"task fails: task $t retry ends but fails", e)
         }
     }
   }
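
Note: besides formatting, `SinkTaskRunner.blockExecute` gains an explicit `@scala.annotation.tailrec`, so the compiler verifies the retry loop really is tail-recursive and compiles it to a loop. A compact sketch of the same retry-with-tailrec pattern — simplified, and not reproducing Griffin's `Future`/`Await` handling:

```scala
import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

object RetryDemo {
  // The annotation makes the compiler reject any change that breaks tail recursion.
  @tailrec
  def blockExecute[T](task: () => T, retry: Int): Option[T] =
    Try(task()) match {
      case Success(value) => Some(value)
      case Failure(e) if retry > 0 =>
        println(s"task failed (${e.getMessage}), retries left: $retry")
        blockExecute(task, retry - 1)
      case Failure(e) =>
        println(s"task failed permanently: ${e.getMessage}")
        None
    }

  def main(args: Array[String]): Unit = {
    var attempts = 0
    val result = blockExecute(() => {
      attempts += 1
      if (attempts < 3) throw new RuntimeException("not yet") else "ok"
    }, retry = 5)
    println(result) // Some(ok) after two failed attempts
  }
}
```
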
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/DQStep.scala b/measure/src/main/scala/org/apache/griffin/measure/step/DQStep.scala
index 3df6d28..a6eb95a 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/DQStep.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/DQStep.scala
@@ -25,19 +25,17 @@
   val name: String
 
   /**
-    * @return execution success
-    */
+   * @return execution success
+   */
   def execute(context: DQContext): Boolean
 
-  def getNames(): Seq[String] = name :: Nil
+  def getNames: Seq[String] = name :: Nil
 
 }
 
 object DQStepStatus extends Enumeration {
-  val PENDING = Value
-  val RUNNING = Value
-  val COMPLETE = Value
-  val FAILED = Value
+  val PENDING: DQStepStatus.Value = Value
+  val RUNNING: DQStepStatus.Value = Value
+  val COMPLETE: DQStepStatus.Value = Value
+  val FAILED: DQStepStatus.Value = Value
 }
-
-
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/SeqDQStep.scala b/measure/src/main/scala/org/apache/griffin/measure/step/SeqDQStep.scala
index df1a47a..0eaea64 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/SeqDQStep.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/SeqDQStep.scala
@@ -20,8 +20,8 @@
 import org.apache.griffin.measure.context.DQContext
 
 /**
-  * sequence of dq steps
-  */
+ * sequence of dq steps
+ */
 case class SeqDQStep(dqSteps: Seq[DQStep]) extends DQStep {
 
   val name: String = ""
@@ -29,15 +29,13 @@
   val details: Map[String, Any] = Map()
 
   /**
-    * @return execution success
-    */
+   * @return execution success
+   */
   def execute(context: DQContext): Boolean = {
-    dqSteps.foldLeft(true) { (ret, dqStep) =>
-      ret && dqStep.execute(context)
-    }
+    dqSteps.forall(dqStep => dqStep.execute(context))
   }
 
-  override def getNames(): Seq[String] = {
+  override def getNames: Seq[String] = {
     dqSteps.foldLeft(Nil: Seq[String]) { (ret, dqStep) =>
       ret ++ dqStep.getNames
     }
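
Note: in `SeqDQStep.execute`, the manual `foldLeft(true) { (ret, step) => ret && step.execute(context) }` becomes `dqSteps.forall(...)`. The two behave the same here — the old `&&` already stopped executing steps after the first failure — but `forall` states that intent directly and stops traversing as well. A small model of the behaviour:

```scala
object SeqStepDemo {
  // Each "step" returns true on success; a sequence succeeds only if all steps do.
  final case class Step(name: String, ok: Boolean) {
    def execute(): Boolean = {
      println(s"executing $name")
      ok
    }
  }

  def main(args: Array[String]): Unit = {
    val steps =
      Seq(Step("read", ok = true), Step("transform", ok = false), Step("write", ok = true))

    // forall short-circuits: "write" is never executed once "transform" fails.
    val success = steps.forall(_.execute())
    println(s"sequence success: $success")
  }
}
```
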
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/BatchDataSourceStepBuilder.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/BatchDataSourceStepBuilder.scala
index 9905f61..4ad2303 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/BatchDataSourceStepBuilder.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/BatchDataSourceStepBuilder.scala
@@ -21,7 +21,6 @@
 import org.apache.griffin.measure.context.DQContext
 import org.apache.griffin.measure.step.read.ReadStep
 
-
 case class BatchDataSourceStepBuilder() extends DataSourceParamStepBuilder {
 
   def buildReadSteps(context: DQContext, dcParam: DataConnectorParam): Option[ReadStep] = {
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/ConstantColumns.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/ConstantColumns.scala
index 7eb8206..20dc7af 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/ConstantColumns.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/ConstantColumns.scala
@@ -18,8 +18,8 @@
 package org.apache.griffin.measure.step.builder
 
 /**
-  * for griffin dsl rules, the constant columns might be used during calculation,
-  */
+ * for griffin dsl rules, the constant columns might be used during calculation,
+ */
 object ConstantColumns {
   val tmst = "__tmst"
   val metric = "__metric"
@@ -33,5 +33,6 @@
 
   val rowNumber = "__rn"
 
-  val columns = List[String](tmst, metric, record, empty, beginTs, endTs, distinct, rowNumber)
+  val columns: List[String] =
+    List[String](tmst, metric, record, empty, beginTs, endTs, distinct, rowNumber)
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/DQStepBuilder.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/DQStepBuilder.scala
index efc4537..1df25c1 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/DQStepBuilder.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/DQStepBuilder.scala
@@ -27,8 +27,8 @@
 import org.apache.griffin.measure.step._
 
 /**
-  * build dq step by param
-  */
+ * build dq step by param
+ */
 trait DQStepBuilder extends Loggable with Serializable {
 
   type ParamType <: Param
@@ -44,14 +44,15 @@
 
 object DQStepBuilder {
 
-  def buildStepOptByDataSourceParam(context: DQContext, dsParam: DataSourceParam
-                                   ): Option[DQStep] = {
+  def buildStepOptByDataSourceParam(
+      context: DQContext,
+      dsParam: DataSourceParam): Option[DQStep] = {
     getDataSourceParamStepBuilder(context.procType)
       .flatMap(_.buildDQStep(context, dsParam))
   }
 
-  private def getDataSourceParamStepBuilder(procType: ProcessType)
-  : Option[DataSourceParamStepBuilder] = {
+  private def getDataSourceParamStepBuilder(
+      procType: ProcessType): Option[DataSourceParamStepBuilder] = {
     procType match {
       case BatchProcessType => Some(BatchDataSourceStepBuilder())
       case StreamingProcessType => Some(StreamingDataSourceStepBuilder())
@@ -59,21 +60,22 @@
     }
   }
 
-  def buildStepOptByRuleParam(context: DQContext, ruleParam: RuleParam
-                             ): Option[DQStep] = {
+  def buildStepOptByRuleParam(context: DQContext, ruleParam: RuleParam): Option[DQStep] = {
     val dslType = ruleParam.getDslType
     val dsNames = context.dataSourceNames
     val funcNames = context.functionNames
     val dqStepOpt = getRuleParamStepBuilder(dslType, dsNames, funcNames)
       .flatMap(_.buildDQStep(context, ruleParam))
-    dqStepOpt.toSeq.flatMap(_.getNames).foreach(name =>
-      context.compileTableRegister.registerTable(name)
-    )
+    dqStepOpt.toSeq
+      .flatMap(_.getNames)
+      .foreach(name => context.compileTableRegister.registerTable(name))
     dqStepOpt
   }
 
-  private def getRuleParamStepBuilder(dslType: DslType, dsNames: Seq[String], funcNames: Seq[String]
-                                     ): Option[RuleParamStepBuilder] = {
+  private def getRuleParamStepBuilder(
+      dslType: DslType,
+      dsNames: Seq[String],
+      funcNames: Seq[String]): Option[RuleParamStepBuilder] = {
     dslType match {
       case SparkSql => Some(SparkSqlDQStepBuilder())
       case DataFrameOpsType => Some(DataFrameOpsDQStepBuilder())
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/DQStepNameGenerator.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/DQStepNameGenerator.scala
index 9774af2..a42f156 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/DQStepNameGenerator.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/DQStepNameGenerator.scala
@@ -24,7 +24,7 @@
   private val head: String = "step"
 
   def genName: String = {
-    s"${head}${increment}"
+    s"$head$increment"
   }
 
   private def increment: Long = {
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/DataFrameOpsDQStepBuilder.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/DataFrameOpsDQStepBuilder.scala
index f01f3e4..81df000 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/DataFrameOpsDQStepBuilder.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/DataFrameOpsDQStepBuilder.scala
@@ -28,7 +28,12 @@
     val name = getStepName(ruleParam.getOutDfName())
     val inputDfName = getStepName(ruleParam.getInDfName())
     val transformStep = DataFrameOpsTransformStep(
-      name, inputDfName, ruleParam.getRule, ruleParam.getDetails, None, ruleParam.getCache)
+      name,
+      inputDfName,
+      ruleParam.getRule,
+      ruleParam.getDetails,
+      None,
+      ruleParam.getCache)
     transformStep +: buildDirectWriteSteps(ruleParam)
   }
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/DataSourceParamStepBuilder.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/DataSourceParamStepBuilder.scala
index 01c0a43..baa5639 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/DataSourceParamStepBuilder.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/DataSourceParamStepBuilder.scala
@@ -23,8 +23,8 @@
 import org.apache.griffin.measure.step.read._
 
 /**
-  * build dq step by data source param
-  */
+ * build dq step by data source param
+ */
 trait DataSourceParamStepBuilder extends DQStepBuilder {
 
   type ParamType = DataSourceParam
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/GriffinDslDQStepBuilder.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/GriffinDslDQStepBuilder.scala
index 0b04d44..94c899d 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/GriffinDslDQStepBuilder.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/GriffinDslDQStepBuilder.scala
@@ -23,15 +23,13 @@
 import org.apache.griffin.measure.step.builder.dsl.parser.GriffinDslParser
 import org.apache.griffin.measure.step.builder.dsl.transform.Expr2DQSteps
 
+case class GriffinDslDQStepBuilder(dataSourceNames: Seq[String], functionNames: Seq[String])
+    extends RuleParamStepBuilder {
 
-case class GriffinDslDQStepBuilder(dataSourceNames: Seq[String],
-                                   functionNames: Seq[String]
-                                  ) extends RuleParamStepBuilder {
-
-  val filteredFunctionNames = functionNames.filter { fn =>
+  val filteredFunctionNames: Seq[String] = functionNames.filter { fn =>
     fn.matches("""^[a-zA-Z_]\w*$""")
   }
-  val parser = GriffinDslParser(dataSourceNames, filteredFunctionNames)
+  val parser: GriffinDslParser = GriffinDslParser(dataSourceNames, filteredFunctionNames)
 
   def buildSteps(context: DQContext, ruleParam: RuleParam): Seq[DQStep] = {
     val name = getStepName(ruleParam.getOutDfName())
@@ -42,14 +40,14 @@
       if (result.successful) {
         val expr = result.get
         val expr2DQSteps = Expr2DQSteps(context, expr, ruleParam.replaceOutDfName(name))
-        expr2DQSteps.getDQSteps()
+        expr2DQSteps.getDQSteps
       } else {
-        warn(s"parse rule [ ${rule} ] fails: \n${result}")
+        warn(s"parse rule [ $rule ] fails: \n$result")
         Nil
       }
     } catch {
       case e: Throwable =>
-        error(s"generate rule plan ${name} fails: ${e.getMessage}", e)
+        error(s"generate rule plan $name fails: ${e.getMessage}", e)
         Nil
     }
   }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/RuleParamStepBuilder.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/RuleParamStepBuilder.scala
index c7f6051..e54d3d4 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/RuleParamStepBuilder.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/RuleParamStepBuilder.scala
@@ -21,11 +21,15 @@
 import org.apache.griffin.measure.configuration.enums.OutputType._
 import org.apache.griffin.measure.context.DQContext
 import org.apache.griffin.measure.step.{DQStep, SeqDQStep}
-import org.apache.griffin.measure.step.write.{DataSourceUpdateWriteStep, MetricWriteStep, RecordWriteStep}
+import org.apache.griffin.measure.step.write.{
+  DataSourceUpdateWriteStep,
+  MetricWriteStep,
+  RecordWriteStep
+}
 
 /**
-  * build dq step by rule param
-  */
+ * build dq step by rule param
+ */
 trait RuleParamStepBuilder extends DQStepBuilder {
 
   type ParamType = RuleParam
@@ -42,17 +46,26 @@
   protected def buildDirectWriteSteps(ruleParam: RuleParam): Seq[DQStep] = {
     val name = getStepName(ruleParam.getOutDfName())
     // metric writer
-    val metricSteps = ruleParam.getOutputOpt(MetricOutputType).map { metric =>
-      MetricWriteStep(metric.getNameOpt.getOrElse(name), name, metric.getFlatten)
-    }.toSeq
+    val metricSteps = ruleParam
+      .getOutputOpt(MetricOutputType)
+      .map { metric =>
+        MetricWriteStep(metric.getNameOpt.getOrElse(name), name, metric.getFlatten)
+      }
+      .toSeq
     // record writer
-    val recordSteps = ruleParam.getOutputOpt(RecordOutputType).map { record =>
-      RecordWriteStep(record.getNameOpt.getOrElse(name), name)
-    }.toSeq
+    val recordSteps = ruleParam
+      .getOutputOpt(RecordOutputType)
+      .map { record =>
+        RecordWriteStep(record.getNameOpt.getOrElse(name), name)
+      }
+      .toSeq
     // update writer
-    val dsCacheUpdateSteps = ruleParam.getOutputOpt(DscUpdateOutputType).map { dsCacheUpdate =>
-      DataSourceUpdateWriteStep(dsCacheUpdate.getNameOpt.getOrElse(""), name)
-    }.toSeq
+    val dsCacheUpdateSteps = ruleParam
+      .getOutputOpt(DscUpdateOutputType)
+      .map { dsCacheUpdate =>
+        DataSourceUpdateWriteStep(dsCacheUpdate.getNameOpt.getOrElse(""), name)
+      }
+      .toSeq
 
     metricSteps ++ recordSteps ++ dsCacheUpdateSteps
   }
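
Note: `RuleParamStepBuilder` also shows how scalafmt wraps a grouped import that would cross the 98-character limit — the selector braces are expanded onto separate lines rather than splitting the import into several statements. The same shape with standard-library classes (shown expanded for illustration even though this particular line would still fit):

```scala
import java.util.concurrent.{
  ConcurrentHashMap,
  Executors,
  ThreadPoolExecutor,
  TimeUnit
}

object GroupedImportDemo {
  def main(args: Array[String]): Unit = {
    val pool: ThreadPoolExecutor =
      Executors.newFixedThreadPool(2).asInstanceOf[ThreadPoolExecutor]
    val counts = new ConcurrentHashMap[String, Int]()
    counts.put("workers", pool.getCorePoolSize)
    println(counts)
    pool.shutdown()
    pool.awaitTermination(1, TimeUnit.SECONDS)
  }
}
```
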
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/SparkSqlDQStepBuilder.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/SparkSqlDQStepBuilder.scala
index 63e7358..669001e 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/SparkSqlDQStepBuilder.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/SparkSqlDQStepBuilder.scala
@@ -27,7 +27,11 @@
   def buildSteps(context: DQContext, ruleParam: RuleParam): Seq[DQStep] = {
     val name = getStepName(ruleParam.getOutDfName())
     val transformStep = SparkSqlTransformStep(
-      name, ruleParam.getRule, ruleParam.getDetails, None, ruleParam.getCache)
+      name,
+      ruleParam.getRule,
+      ruleParam.getDetails,
+      None,
+      ruleParam.getCache)
     transformStep +: buildDirectWriteSteps(ruleParam)
   }
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/ClauseExpression.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/ClauseExpression.scala
index 5def949..5afbdcf 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/ClauseExpression.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/ClauseExpression.scala
@@ -17,11 +17,10 @@
 
 package org.apache.griffin.measure.step.builder.dsl.expr
 
-trait ClauseExpression extends Expr {
-}
+trait ClauseExpression extends Expr {}
 
-case class SelectClause(exprs: Seq[Expr], extraConditionOpt: Option[ExtraConditionExpr]
-                       ) extends ClauseExpression {
+case class SelectClause(exprs: Seq[Expr], extraConditionOpt: Option[ExtraConditionExpr])
+    extends ClauseExpression {
 
   addChildren(exprs)
 
@@ -33,8 +32,9 @@
   }
   def coalesceDesc: String = desc
 
-  override def map(func: (Expr) => Expr): SelectClause = {
-    SelectClause(exprs.map(func(_)),
+  override def map(func: Expr => Expr): SelectClause = {
+    SelectClause(
+      exprs.map(func(_)),
       extraConditionOpt.map(func(_).asInstanceOf[ExtraConditionExpr]))
   }
 
@@ -42,7 +42,7 @@
 
 case class FromClause(dataSource: String) extends ClauseExpression {
 
-  def desc: String = s"FROM `${dataSource}`"
+  def desc: String = s"FROM `$dataSource`"
   def coalesceDesc: String = desc
 
 }
@@ -54,36 +54,37 @@
   def desc: String = s"WHERE ${expr.desc}"
   def coalesceDesc: String = s"WHERE ${expr.coalesceDesc}"
 
-  override def map(func: (Expr) => Expr): WhereClause = {
+  override def map(func: Expr => Expr): WhereClause = {
     WhereClause(func(expr))
   }
 
 }
 
-case class GroupbyClause(exprs: Seq[Expr], havingClauseOpt: Option[Expr]) extends ClauseExpression {
+case class GroupbyClause(exprs: Seq[Expr], havingClauseOpt: Option[Expr])
+    extends ClauseExpression {
 
   addChildren(exprs ++ havingClauseOpt.toSeq)
 
   def desc: String = {
     val gbs = exprs.map(_.desc).mkString(", ")
     havingClauseOpt match {
-      case Some(having) => s"GROUP BY ${gbs} HAVING ${having.desc}"
-      case _ => s"GROUP BY ${gbs}"
+      case Some(having) => s"GROUP BY $gbs HAVING ${having.desc}"
+      case _ => s"GROUP BY $gbs"
     }
   }
   def coalesceDesc: String = {
     val gbs = exprs.map(_.desc).mkString(", ")
     havingClauseOpt match {
-      case Some(having) => s"GROUP BY ${gbs} HAVING ${having.coalesceDesc}"
-      case _ => s"GROUP BY ${gbs}"
+      case Some(having) => s"GROUP BY $gbs HAVING ${having.coalesceDesc}"
+      case _ => s"GROUP BY $gbs"
     }
   }
 
   def merge(other: GroupbyClause): GroupbyClause = {
     val newHavingClauseOpt = (havingClauseOpt, other.havingClauseOpt) match {
       case (Some(hc), Some(ohc)) =>
-        val logical1 = LogicalFactorExpr(hc, false, None)
-        val logical2 = LogicalFactorExpr(ohc, false, None)
+        val logical1 = LogicalFactorExpr(hc, withBracket = false, None)
+        val logical2 = LogicalFactorExpr(ohc, withBracket = false, None)
         Some(BinaryLogicalExpr(logical1, ("AND", logical2) :: Nil))
       case (a @ Some(_), _) => a
       case (_, b @ Some(_)) => b
@@ -92,7 +93,7 @@
     GroupbyClause(exprs ++ other.exprs, newHavingClauseOpt)
   }
 
-  override def map(func: (Expr) => Expr): GroupbyClause = {
+  override def map(func: Expr => Expr): GroupbyClause = {
     GroupbyClause(exprs.map(func(_)), havingClauseOpt.map(func(_)))
   }
 
@@ -108,7 +109,7 @@
   }
   def coalesceDesc: String = desc
 
-  override def map(func: (Expr) => Expr): OrderItem = {
+  override def map(func: Expr => Expr): OrderItem = {
     OrderItem(func(expr), orderOpt)
   }
 }
@@ -119,14 +120,14 @@
 
   def desc: String = {
     val obs = items.map(_.desc).mkString(", ")
-    s"ORDER BY ${obs}"
+    s"ORDER BY $obs"
   }
   def coalesceDesc: String = {
     val obs = items.map(_.desc).mkString(", ")
-    s"ORDER BY ${obs}"
+    s"ORDER BY $obs"
   }
 
-  override def map(func: (Expr) => Expr): OrderbyClause = {
+  override def map(func: Expr => Expr): OrderbyClause = {
     OrderbyClause(items.map(func(_).asInstanceOf[OrderItem]))
   }
 }
@@ -137,14 +138,14 @@
 
   def desc: String = {
     val obs = items.map(_.desc).mkString(", ")
-    s"SORT BY ${obs}"
+    s"SORT BY $obs"
   }
   def coalesceDesc: String = {
     val obs = items.map(_.desc).mkString(", ")
-    s"SORT BY ${obs}"
+    s"SORT BY $obs"
   }
 
-  override def map(func: (Expr) => Expr): SortbyClause = {
+  override def map(func: Expr => Expr): SortbyClause = {
     SortbyClause(items.map(func(_).asInstanceOf[OrderItem]))
   }
 }
@@ -156,53 +157,56 @@
   def desc: String = s"LIMIT ${expr.desc}"
   def coalesceDesc: String = s"LIMIT ${expr.coalesceDesc}"
 
-  override def map(func: (Expr) => Expr): LimitClause = {
+  override def map(func: Expr => Expr): LimitClause = {
     LimitClause(func(expr))
   }
 }
 
-case class CombinedClause(selectClause: SelectClause, fromClauseOpt: Option[FromClause],
-                          tails: Seq[ClauseExpression]
-                         ) extends ClauseExpression {
+case class CombinedClause(
+    selectClause: SelectClause,
+    fromClauseOpt: Option[FromClause],
+    tails: Seq[ClauseExpression])
+    extends ClauseExpression {
 
   addChildren({
-    val headClauses: Seq[ClauseExpression] = selectClause +: (fromClauseOpt.toSeq)
+    val headClauses: Seq[ClauseExpression] = selectClause +: fromClauseOpt.toSeq
     headClauses ++ tails
   })
 
   def desc: String = {
     val selectDesc = s"SELECT ${selectClause.desc}"
     val fromDesc = fromClauseOpt.map(_.desc).mkString(" ")
-    val headDesc = s"${selectDesc} ${fromDesc}"
+    val headDesc = s"$selectDesc $fromDesc"
     tails.foldLeft(headDesc) { (head, tail) =>
-      s"${head} ${tail.desc}"
+      s"$head ${tail.desc}"
     }
   }
   def coalesceDesc: String = {
     val selectDesc = s"SELECT ${selectClause.coalesceDesc}"
     val fromDesc = fromClauseOpt.map(_.coalesceDesc).mkString(" ")
-    val headDesc = s"${selectDesc} ${fromDesc}"
+    val headDesc = s"$selectDesc $fromDesc"
     tails.foldLeft(headDesc) { (head, tail) =>
-      s"${head} ${tail.coalesceDesc}"
+      s"$head ${tail.coalesceDesc}"
     }
   }
 
-  override def map(func: (Expr) => Expr): CombinedClause = {
-    CombinedClause(func(selectClause).asInstanceOf[SelectClause],
+  override def map(func: Expr => Expr): CombinedClause = {
+    CombinedClause(
+      func(selectClause).asInstanceOf[SelectClause],
       fromClauseOpt.map(func(_).asInstanceOf[FromClause]),
-      tails.map(func(_).asInstanceOf[ClauseExpression])
-    )
+      tails.map(func(_).asInstanceOf[ClauseExpression]))
   }
 }
 
-case class ProfilingClause(selectClause: SelectClause,
-                           fromClauseOpt: Option[FromClause],
-                           groupbyClauseOpt: Option[GroupbyClause],
-                           preGroupbyClauses: Seq[ClauseExpression],
-                           postGroupbyClauses: Seq[ClauseExpression]
-                          ) extends ClauseExpression {
+case class ProfilingClause(
+    selectClause: SelectClause,
+    fromClauseOpt: Option[FromClause],
+    groupbyClauseOpt: Option[GroupbyClause],
+    preGroupbyClauses: Seq[ClauseExpression],
+    postGroupbyClauses: Seq[ClauseExpression])
+    extends ClauseExpression {
   addChildren({
-    val headClauses: Seq[ClauseExpression] = selectClause +: (fromClauseOpt.toSeq)
+    val headClauses: Seq[ClauseExpression] = selectClause +: fromClauseOpt.toSeq
     groupbyClauseOpt match {
       case Some(gc) => (headClauses ++ preGroupbyClauses) ++ (gc +: postGroupbyClauses)
       case _ => (headClauses ++ preGroupbyClauses) ++ postGroupbyClauses
@@ -215,7 +219,7 @@
     val groupbyDesc = groupbyClauseOpt.map(_.desc).mkString(" ")
     val preDesc = preGroupbyClauses.map(_.desc).mkString(" ")
     val postDesc = postGroupbyClauses.map(_.desc).mkString(" ")
-    s"${selectDesc} ${fromDesc} ${preDesc} ${groupbyDesc} ${postDesc}"
+    s"$selectDesc $fromDesc $preDesc $groupbyDesc $postDesc"
   }
   def coalesceDesc: String = {
     val selectDesc = selectClause.coalesceDesc
@@ -223,16 +227,16 @@
     val groupbyDesc = groupbyClauseOpt.map(_.coalesceDesc).mkString(" ")
     val preDesc = preGroupbyClauses.map(_.coalesceDesc).mkString(" ")
     val postDesc = postGroupbyClauses.map(_.coalesceDesc).mkString(" ")
-    s"${selectDesc} ${fromDesc} ${preDesc} ${groupbyDesc} ${postDesc}"
+    s"$selectDesc $fromDesc $preDesc $groupbyDesc $postDesc"
   }
 
-  override def map(func: (Expr) => Expr): ProfilingClause = {
-    ProfilingClause(func(selectClause).asInstanceOf[SelectClause],
+  override def map(func: Expr => Expr): ProfilingClause = {
+    ProfilingClause(
+      func(selectClause).asInstanceOf[SelectClause],
       fromClauseOpt.map(func(_).asInstanceOf[FromClause]),
       groupbyClauseOpt.map(func(_).asInstanceOf[GroupbyClause]),
       preGroupbyClauses.map(func(_).asInstanceOf[ClauseExpression]),
-      postGroupbyClauses.map(func(_).asInstanceOf[ClauseExpression])
-    )
+      postGroupbyClauses.map(func(_).asInstanceOf[ClauseExpression]))
   }
 }
 
@@ -241,7 +245,7 @@
 
   def desc: String = exprs.map(_.desc).mkString(", ")
   def coalesceDesc: String = exprs.map(_.coalesceDesc).mkString(", ")
-  override def map(func: (Expr) => Expr): UniquenessClause = UniquenessClause(exprs.map(func(_)))
+  override def map(func: Expr => Expr): UniquenessClause = UniquenessClause(exprs.map(func(_)))
 }
 
 case class DistinctnessClause(exprs: Seq[Expr]) extends ClauseExpression {
@@ -249,7 +253,7 @@
 
   def desc: String = exprs.map(_.desc).mkString(", ")
   def coalesceDesc: String = exprs.map(_.coalesceDesc).mkString(", ")
-  override def map(func: (Expr) => Expr) : DistinctnessClause =
+  override def map(func: Expr => Expr): DistinctnessClause =
     DistinctnessClause(exprs.map(func(_)))
 }
 
@@ -258,7 +262,7 @@
 
   def desc: String = exprs.map(_.desc).mkString(", ")
   def coalesceDesc: String = exprs.map(_.coalesceDesc).mkString(", ")
-  override def map(func: (Expr) => Expr): TimelinessClause = TimelinessClause(exprs.map(func(_)))
+  override def map(func: Expr => Expr): TimelinessClause = TimelinessClause(exprs.map(func(_)))
 }
 
 case class CompletenessClause(exprs: Seq[Expr]) extends ClauseExpression {
@@ -266,6 +270,6 @@
 
   def desc: String = exprs.map(_.desc).mkString(", ")
   def coalesceDesc: String = exprs.map(_.coalesceDesc).mkString(", ")
-  override def map(func: (Expr) => Expr): CompletenessClause =
+  override def map(func: Expr => Expr): CompletenessClause =
     CompletenessClause(exprs.map(func(_)))
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/Expr.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/Expr.scala
index 26506c2..19eabe6 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/Expr.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/Expr.scala
@@ -18,8 +18,8 @@
 package org.apache.griffin.measure.step.builder.dsl.expr
 
 /**
-  * expr parsed by griffin dsl
-  */
+ * expr parsed by griffin dsl
+ */
 trait Expr extends TreeNode with ExprTag with Serializable {
 
   def desc: String
@@ -29,6 +29,6 @@
   def extractSelf: Expr = this
 
   // execution
-  def map(func: (Expr) => Expr): Expr = func(this)
+  def map(func: Expr => Expr): Expr = func(this)
 
 }
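
Note: the expression-tree files drop the redundant parentheses around single-argument function types (`(Expr) => Expr` becomes `Expr => Expr`) and collapse empty trait bodies onto one line. A tiny standalone version of the same `map` pattern, with placeholder node types:

```scala
// Empty-body marker trait on a single line.
trait Node {}

final case class Leaf(value: Int) extends Node
final case class Pair(left: Node, right: Node) extends Node {
  // Single-argument function types need no parentheses: 'Node => Node'.
  def map(func: Node => Node): Pair = Pair(func(left), func(right))
}

object FunctionTypeSugar {
  def main(args: Array[String]): Unit = {
    val doubled = Pair(Leaf(1), Leaf(2)).map {
      case Leaf(v) => Leaf(v * 2)
      case other => other
    }
    println(doubled) // Pair(Leaf(2),Leaf(4))
  }
}
```
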
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/FunctionExpr.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/FunctionExpr.scala
index 3667f6a..c93b5cf 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/FunctionExpr.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/FunctionExpr.scala
@@ -17,17 +17,20 @@
 
 package org.apache.griffin.measure.step.builder.dsl.expr
 
-case class FunctionExpr(functionName: String, args: Seq[Expr],
-                        extraConditionOpt: Option[ExtraConditionExpr],
-                        aliasOpt: Option[String]
-                       ) extends Expr with AliasableExpr {
+case class FunctionExpr(
+    functionName: String,
+    args: Seq[Expr],
+    extraConditionOpt: Option[ExtraConditionExpr],
+    aliasOpt: Option[String])
+    extends Expr
+    with AliasableExpr {
 
   addChildren(args)
 
   def desc: String = {
     extraConditionOpt match {
-      case Some(cdtn) => s"${functionName}(${cdtn.desc} ${args.map(_.desc).mkString(", ")})"
-      case _ => s"${functionName}(${args.map(_.desc).mkString(", ")})"
+      case Some(cdtn) => s"$functionName(${cdtn.desc} ${args.map(_.desc).mkString(", ")})"
+      case _ => s"$functionName(${args.map(_.desc).mkString(", ")})"
     }
   }
   def coalesceDesc: String = desc
@@ -37,8 +40,11 @@
     } else aliasOpt
   }
 
-  override def map(func: (Expr) => Expr): FunctionExpr = {
-    FunctionExpr(functionName, args.map(func(_)),
-      extraConditionOpt.map(func(_).asInstanceOf[ExtraConditionExpr]), aliasOpt)
+  override def map(func: Expr => Expr): FunctionExpr = {
+    FunctionExpr(
+      functionName,
+      args.map(func(_)),
+      extraConditionOpt.map(func(_).asInstanceOf[ExtraConditionExpr]),
+      aliasOpt)
   }
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/LiteralExpr.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/LiteralExpr.scala
index 24042be..a3c5748 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/LiteralExpr.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/LiteralExpr.scala
@@ -44,7 +44,7 @@
         str.toLong.toString
       }
     } catch {
-      case e: Throwable => throw new Exception(s"${str} is invalid number")
+      case _: Throwable => throw new Exception(s"$str is invalid number")
     }
   }
 }
@@ -53,7 +53,7 @@
   def desc: String = {
     TimeUtil.milliseconds(str) match {
       case Some(t) => t.toString
-      case _ => throw new Exception(s"${str} is invalid time")
+      case _ => throw new Exception(s"$str is invalid time")
     }
   }
 }
@@ -65,7 +65,7 @@
     str match {
       case TrueRegex() => true.toString
       case FalseRegex() => false.toString
-      case _ => throw new Exception(s"${str} is invalid boolean")
+      case _ => throw new Exception(s"$str is invalid boolean")
     }
   }
 }
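
Note: `LiteralExpr` switches `case e: Throwable` to `case _: Throwable` where the caught exception is never used, silencing one of the build warnings this PR set out to clean up. In the same spirit, a hypothetical parsing helper (narrowed here to `NumberFormatException` for the sketch):

```scala
object UnusedBindingDemo {
  // The exception value is not needed, so it is matched with '_' instead of a name.
  def parseLong(str: String): Long =
    try {
      str.trim.toLong
    } catch {
      case _: NumberFormatException => throw new Exception(s"$str is invalid number")
    }

  def main(args: Array[String]): Unit = {
    println(parseLong(" 42 "))                 // 42
    println(scala.util.Try(parseLong("abc")))  // Failure(java.lang.Exception: abc is invalid number)
  }
}
```
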
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/LogicalExpr.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/LogicalExpr.scala
index 0778f0d..5446da0 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/LogicalExpr.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/LogicalExpr.scala
@@ -17,8 +17,7 @@
 
 package org.apache.griffin.measure.step.builder.dsl.expr
 
-trait LogicalExpr extends Expr {
-}
+trait LogicalExpr extends Expr {}
 
 case class InExpr(head: Expr, is: Boolean, range: Seq[Expr]) extends LogicalExpr {
 
@@ -26,14 +25,14 @@
 
   def desc: String = {
     val notStr = if (is) "" else " NOT"
-    s"${head.desc}${notStr} IN (${range.map(_.desc).mkString(", ")})"
+    s"${head.desc}$notStr IN (${range.map(_.desc).mkString(", ")})"
   }
   def coalesceDesc: String = {
     val notStr = if (is) "" else " NOT"
-    s"${head.coalesceDesc}${notStr} IN (${range.map(_.coalesceDesc).mkString(", ")})"
+    s"${head.coalesceDesc}$notStr IN (${range.map(_.coalesceDesc).mkString(", ")})"
   }
 
-  override def map(func: (Expr) => Expr): InExpr = {
+  override def map(func: Expr => Expr): InExpr = {
     InExpr(func(head), is, range.map(func(_)))
   }
 }
@@ -51,7 +50,7 @@
       case first :: second :: _ => s"${first.desc} AND ${second.desc}"
       case _ => throw new Exception("between expression exception: range less than 2")
     }
-    s"${head.desc}${notStr} BETWEEN ${rangeStr}"
+    s"${head.desc}$notStr BETWEEN $rangeStr"
   }
   def coalesceDesc: String = {
     val notStr = if (is) "" else " NOT"
@@ -59,10 +58,10 @@
       case first :: second :: _ => s"${first.coalesceDesc} AND ${second.coalesceDesc}"
       case _ => throw new Exception("between expression exception: range less than 2")
     }
-    s"${head.coalesceDesc}${notStr} BETWEEN ${rangeStr}"
+    s"${head.coalesceDesc}$notStr BETWEEN $rangeStr"
   }
 
-  override def map(func: (Expr) => Expr): BetweenExpr = {
+  override def map(func: Expr => Expr): BetweenExpr = {
     BetweenExpr(func(head), is, range.map(func(_)))
   }
 }
@@ -73,14 +72,14 @@
 
   def desc: String = {
     val notStr = if (is) "" else " NOT"
-    s"${head.desc}${notStr} LIKE ${value.desc}"
+    s"${head.desc}$notStr LIKE ${value.desc}"
   }
   def coalesceDesc: String = {
     val notStr = if (is) "" else " NOT"
-    s"${head.coalesceDesc}${notStr} LIKE ${value.coalesceDesc}"
+    s"${head.coalesceDesc}$notStr LIKE ${value.coalesceDesc}"
   }
 
-  override def map(func: (Expr) => Expr): LikeExpr = {
+  override def map(func: Expr => Expr): LikeExpr = {
     LikeExpr(func(head), is, func(value))
   }
 }
@@ -91,30 +90,29 @@
 
   def desc: String = {
     val notStr = if (is) "" else " NOT"
-    s"${head.desc}${notStr} RLIKE ${value.desc}"
+    s"${head.desc}$notStr RLIKE ${value.desc}"
   }
   def coalesceDesc: String = {
     val notStr = if (is) "" else " NOT"
-    s"${head.coalesceDesc}${notStr} RLIKE ${value.coalesceDesc}"
+    s"${head.coalesceDesc}$notStr RLIKE ${value.coalesceDesc}"
   }
 
-  override def map(func: (Expr) => Expr): RLikeExpr = {
+  override def map(func: Expr => Expr): RLikeExpr = {
     RLikeExpr(func(head), is, func(value))
   }
 }
 
-
 case class IsNullExpr(head: Expr, is: Boolean) extends LogicalExpr {
 
   addChild(head)
 
   def desc: String = {
     val notStr = if (is) "" else " NOT"
-    s"${head.desc} IS${notStr} NULL"
+    s"${head.desc} IS$notStr NULL"
   }
   def coalesceDesc: String = desc
 
-  override def map(func: (Expr) => Expr): IsNullExpr = {
+  override def map(func: Expr => Expr): IsNullExpr = {
     IsNullExpr(func(head), is)
   }
 }
@@ -129,15 +127,16 @@
   }
   def coalesceDesc: String = desc
 
-  override def map(func: (Expr) => Expr): IsNanExpr = {
+  override def map(func: Expr => Expr): IsNanExpr = {
     IsNanExpr(func(head), is)
   }
 }
 
 // -----------
 
-case class LogicalFactorExpr(factor: Expr, withBracket: Boolean, aliasOpt: Option[String]
-                            ) extends LogicalExpr with AliasableExpr {
+case class LogicalFactorExpr(factor: Expr, withBracket: Boolean, aliasOpt: Option[String])
+    extends LogicalExpr
+    with AliasableExpr {
 
   addChild(factor)
 
@@ -149,7 +148,7 @@
     else factor.extractSelf
   }
 
-  override def map(func: (Expr) => Expr): LogicalFactorExpr = {
+  override def map(func: Expr => Expr): LogicalFactorExpr = {
     LogicalFactorExpr(func(factor), withBracket, aliasOpt)
   }
 }
@@ -160,12 +159,12 @@
 
   def desc: String = {
     oprs.foldRight(factor.desc) { (opr, fac) =>
-      s"(${trans(opr)} ${fac})"
+      s"(${trans(opr)} $fac)"
     }
   }
   def coalesceDesc: String = {
     oprs.foldRight(factor.coalesceDesc) { (opr, fac) =>
-      s"(${trans(opr)} ${fac})"
+      s"(${trans(opr)} $fac)"
     }
   }
   private def trans(s: String): String = {
@@ -179,29 +178,29 @@
     else factor.extractSelf
   }
 
-  override def map(func: (Expr) => Expr): UnaryLogicalExpr = {
+  override def map(func: Expr => Expr): UnaryLogicalExpr = {
     UnaryLogicalExpr(oprs, func(factor).asInstanceOf[LogicalExpr])
   }
 }
 
 case class BinaryLogicalExpr(factor: LogicalExpr, tails: Seq[(String, LogicalExpr)])
-  extends LogicalExpr {
+    extends LogicalExpr {
 
   addChildren(factor +: tails.map(_._2))
 
   def desc: String = {
     val res = tails.foldLeft(factor.desc) { (fac, tail) =>
       val (opr, expr) = tail
-      s"${fac} ${trans(opr)} ${expr.desc}"
+      s"$fac ${trans(opr)} ${expr.desc}"
     }
-    if (tails.size <= 0) res else s"${res}"
+    if (tails.size <= 0) res else s"$res"
   }
   def coalesceDesc: String = {
     val res = tails.foldLeft(factor.coalesceDesc) { (fac, tail) =>
       val (opr, expr) = tail
-      s"${fac} ${trans(opr)} ${expr.coalesceDesc}"
+      s"$fac ${trans(opr)} ${expr.coalesceDesc}"
     }
-    if (tails.size <= 0) res else s"${res}"
+    if (tails.size <= 0) res else s"$res"
   }
   private def trans(s: String): String = {
     s match {
@@ -215,8 +214,8 @@
     else factor.extractSelf
   }
 
-  override def map(func: (Expr) => Expr): BinaryLogicalExpr = {
-    BinaryLogicalExpr(func(factor).asInstanceOf[LogicalExpr], tails.map{ pair =>
+  override def map(func: Expr => Expr): BinaryLogicalExpr = {
+    BinaryLogicalExpr(func(factor).asInstanceOf[LogicalExpr], tails.map { pair =>
       (pair._1, func(pair._2).asInstanceOf[LogicalExpr])
     })
   }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/MathExpr.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/MathExpr.scala
index 2bad687..3a8b290 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/MathExpr.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/MathExpr.scala
@@ -17,11 +17,11 @@
 
 package org.apache.griffin.measure.step.builder.dsl.expr
 
-trait MathExpr extends Expr {
-}
+trait MathExpr extends Expr {}
 
-case class MathFactorExpr(factor: Expr, withBracket: Boolean, aliasOpt: Option[String]
-                         ) extends MathExpr with AliasableExpr {
+case class MathFactorExpr(factor: Expr, withBracket: Boolean, aliasOpt: Option[String])
+    extends MathExpr
+    with AliasableExpr {
 
   addChild(factor)
 
@@ -33,7 +33,7 @@
     else factor.extractSelf
   }
 
-  override def map(func: (Expr) => Expr): MathFactorExpr = {
+  override def map(func: Expr => Expr): MathFactorExpr = {
     MathFactorExpr(func(factor), withBracket, aliasOpt)
   }
 }
@@ -44,12 +44,12 @@
 
   def desc: String = {
     oprs.foldRight(factor.desc) { (opr, fac) =>
-      s"(${opr}${fac})"
+      s"($opr$fac)"
     }
   }
   def coalesceDesc: String = {
     oprs.foldRight(factor.coalesceDesc) { (opr, fac) =>
-      s"(${opr}${fac})"
+      s"($opr$fac)"
     }
   }
   override def extractSelf: Expr = {
@@ -57,7 +57,7 @@
     else factor.extractSelf
   }
 
-  override def map(func: (Expr) => Expr): UnaryMathExpr = {
+  override def map(func: Expr => Expr): UnaryMathExpr = {
     UnaryMathExpr(oprs, func(factor).asInstanceOf[MathExpr])
   }
 }
@@ -69,24 +69,24 @@
   def desc: String = {
     val res = tails.foldLeft(factor.desc) { (fac, tail) =>
       val (opr, expr) = tail
-      s"${fac} ${opr} ${expr.desc}"
+      s"$fac $opr ${expr.desc}"
     }
-    if (tails.size <= 0) res else s"${res}"
+    if (tails.size <= 0) res else s"$res"
   }
   def coalesceDesc: String = {
     val res = tails.foldLeft(factor.coalesceDesc) { (fac, tail) =>
       val (opr, expr) = tail
-      s"${fac} ${opr} ${expr.coalesceDesc}"
+      s"$fac $opr ${expr.coalesceDesc}"
     }
-    if (tails.size <= 0) res else s"${res}"
+    if (tails.size <= 0) res else s"$res"
   }
   override def extractSelf: Expr = {
     if (tails.nonEmpty) this
     else factor.extractSelf
   }
 
-  override def map(func: (Expr) => Expr): BinaryMathExpr = {
-    BinaryMathExpr(func(factor).asInstanceOf[MathExpr], tails.map{ pair =>
+  override def map(func: Expr => Expr): BinaryMathExpr = {
+    BinaryMathExpr(func(factor).asInstanceOf[MathExpr], tails.map { pair =>
       (pair._1, func(pair._2).asInstanceOf[MathExpr])
     })
   }
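
For reviewers less familiar with these expression classes, here is a minimal sketch (illustrative only, not part of this diff) of how the reformatted math expressions compose; the tree below is assembled by hand, whereas the parser normally builds it:

```scala
import org.apache.griffin.measure.step.builder.dsl.expr._

// "1 + 2 * 3" assembled manually for illustration
val one     = MathFactorExpr(LiteralNumberExpr("1"), withBracket = false, None)
val two     = MathFactorExpr(LiteralNumberExpr("2"), withBracket = false, None)
val three   = MathFactorExpr(LiteralNumberExpr("3"), withBracket = false, None)
val product = BinaryMathExpr(two, Seq(("*", three)))   // 2 * 3
val sum     = BinaryMathExpr(one, Seq(("+", product))) // 1 + (2 * 3)

// desc folds factor.desc with each (operator, tail) pair, as in the hunk above
println(sum.desc)
```
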
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/SelectExpr.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/SelectExpr.scala
index 38324e6..9d3d2f5 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/SelectExpr.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/SelectExpr.scala
@@ -22,12 +22,12 @@
 }
 
 case class DataSourceHeadExpr(name: String) extends HeadExpr {
-  def desc: String = s"`${name}`"
+  def desc: String = s"`$name`"
   def coalesceDesc: String = desc
 }
 
 case class FieldNameHeadExpr(field: String) extends HeadExpr {
-  def desc: String = s"`${field}`"
+  def desc: String = s"`$field`"
   def coalesceDesc: String = desc
   override def alias: Option[String] = Some(field)
 }
@@ -45,15 +45,14 @@
   def coalesceDesc: String = expr.coalesceDesc
   override def alias: Option[String] = Some(expr.desc)
 
-  override def map(func: (Expr) => Expr): OtherHeadExpr = {
+  override def map(func: Expr => Expr): OtherHeadExpr = {
     OtherHeadExpr(func(expr))
   }
 }
 
 // -------------
 
-trait SelectExpr extends Expr with AliasableExpr {
-}
+trait SelectExpr extends Expr with AliasableExpr {}
 
 case class AllFieldsSelectExpr() extends SelectExpr {
   def desc: String = ".*"
@@ -62,7 +61,7 @@
 }
 
 case class FieldSelectExpr(field: String) extends SelectExpr {
-  def desc: String = s".`${field}`"
+  def desc: String = s".`$field`"
   def coalesceDesc: String = desc
   override def alias: Option[String] = Some(field)
 }
@@ -75,7 +74,7 @@
   def coalesceDesc: String = desc
   def alias: Option[String] = Some(index.desc)
 
-  override def map(func: (Expr) => Expr): IndexSelectExpr = {
+  override def map(func: Expr => Expr): IndexSelectExpr = {
     IndexSelectExpr(func(index))
   }
 }
@@ -88,7 +87,7 @@
   def coalesceDesc: String = desc
   def alias: Option[String] = Some(functionName)
 
-  override def map(func: (Expr) => Expr): FunctionSelectExpr = {
+  override def map(func: Expr => Expr): FunctionSelectExpr = {
     FunctionSelectExpr(functionName, args.map(func(_)))
   }
 }
@@ -96,7 +95,7 @@
 // -------------
 
 case class SelectionExpr(head: HeadExpr, selectors: Seq[SelectExpr], aliasOpt: Option[String])
-  extends SelectExpr {
+    extends SelectExpr {
 
   addChildren(head +: selectors)
 
@@ -105,27 +104,29 @@
       sel match {
         case FunctionSelectExpr(funcName, args) =>
           val nargs = hd +: args.map(_.desc)
-          s"${funcName}(${nargs.mkString(", ")})"
-        case _ => s"${hd}${sel.desc}"
+          s"$funcName(${nargs.mkString(", ")})"
+        case _ => s"$hd${sel.desc}"
       }
     }
   }
   def coalesceDesc: String = {
     selectors.lastOption match {
       case None => desc
-      case Some(sel: FunctionSelectExpr) => desc
-      case _ => s"coalesce(${desc}, '')"
+      case Some(_: FunctionSelectExpr) => desc
+      case _ => s"coalesce($desc, '')"
     }
   }
   def alias: Option[String] = {
     if (aliasOpt.isEmpty) {
       val aliasSeq = (head +: selectors).flatMap(_.alias)
-      if (aliasSeq.size > 0) Some(aliasSeq.mkString("_")) else None
+      if (aliasSeq.nonEmpty) Some(aliasSeq.mkString("_")) else None
     } else aliasOpt
   }
 
-  override def map(func: (Expr) => Expr): SelectionExpr = {
-    SelectionExpr(func(head).asInstanceOf[HeadExpr],
-      selectors.map(func(_).asInstanceOf[SelectExpr]), aliasOpt)
+  override def map(func: Expr => Expr): SelectionExpr = {
+    SelectionExpr(
+      func(head).asInstanceOf[HeadExpr],
+      selectors.map(func(_).asInstanceOf[SelectExpr]),
+      aliasOpt)
   }
 }
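
Similarly, a hand-built selection (hypothetical names, not part of this diff) shows how `desc` and `alias` are derived after the rewrite:

```scala
import org.apache.griffin.measure.step.builder.dsl.expr._

// selection equivalent to src.name, assembled manually for illustration
val sel = SelectionExpr(DataSourceHeadExpr("src"), Seq(FieldSelectExpr("name")), None)
sel.desc  // concatenates the head desc with each selector's desc, e.g. `src`.`name`
sel.alias // no explicit alias, so the available aliases are joined with "_"
```
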
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/TreeNode.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/TreeNode.scala
index e6ee410..414c771 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/TreeNode.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/expr/TreeNode.scala
@@ -21,39 +21,35 @@
 
 trait TreeNode extends Serializable {
 
-  var children = Seq[TreeNode]()
+  var children: Seq[TreeNode] = Seq[TreeNode]()
 
-  def addChild(expr: TreeNode) : Unit = { children :+= expr }
-  def addChildren(exprs: Seq[TreeNode]) : Unit = { children ++= exprs }
+  def addChild(expr: TreeNode): Unit = { children :+= expr }
+  def addChildren(exprs: Seq[TreeNode]): Unit = { children ++= exprs }
 
-  def preOrderTraverseDepthFirst[T, A <: TreeNode](z: T)
-                                                  (seqOp: (A, T) => T, combOp: (T, T) => T)
-                                                  (implicit tag: ClassTag[A]): T = {
+  def preOrderTraverseDepthFirst[T, A <: TreeNode](z: T)(seqOp: (A, T) => T, combOp: (T, T) => T)(
+      implicit tag: ClassTag[A]): T = {
 
     val clazz = tag.runtimeClass
-    if(clazz.isAssignableFrom(this.getClass)) {
+    if (clazz.isAssignableFrom(this.getClass)) {
       val tv = seqOp(this.asInstanceOf[A], z)
       children.foldLeft(combOp(z, tv)) { (ov, tn) =>
         combOp(ov, tn.preOrderTraverseDepthFirst(z)(seqOp, combOp))
       }
-    }
-    else {
+    } else {
       z
     }
 
   }
-  def postOrderTraverseDepthFirst[T, A <: TreeNode](z: T)
-                                                   (seqOp: (A, T) => T, combOp: (T, T) => T)
-                                                   (implicit tag: ClassTag[A]): T = {
+  def postOrderTraverseDepthFirst[T, A <: TreeNode](
+      z: T)(seqOp: (A, T) => T, combOp: (T, T) => T)(implicit tag: ClassTag[A]): T = {
 
     val clazz = tag.runtimeClass
-    if(clazz.isAssignableFrom(this.getClass)) {
+    if (clazz.isAssignableFrom(this.getClass)) {
       val cv = children.foldLeft(z) { (ov, tn) =>
         combOp(ov, tn.postOrderTraverseDepthFirst(z)(seqOp, combOp))
       }
       combOp(z, seqOp(this.asInstanceOf[A], cv))
-    }
-    else {
+    } else {
       z
     }
   }
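
The curried signatures above also read more naturally at call sites. A minimal usage sketch (not part of this diff):

```scala
import org.apache.griffin.measure.step.builder.dsl.expr.TreeNode

// Counts every node in a tree: seqOp tallies the visited node,
// combOp sums the per-subtree results.
def countNodes(root: TreeNode): Int =
  root.preOrderTraverseDepthFirst[Int, TreeNode](0)((_, acc) => acc + 1, _ + _)
```
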
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/parser/BasicParser.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/parser/BasicParser.scala
index 6c43de0..4fd778e 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/parser/BasicParser.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/parser/BasicParser.scala
@@ -21,10 +21,9 @@
 
 import org.apache.griffin.measure.step.builder.dsl.expr._
 
-
 /**
-  * basic parser for sql like syntax
-  */
+ * basic parser for sql like syntax
+ */
 trait BasicParser extends JavaTokenParsers with Serializable {
 
   val dataSourceNames: Seq[String]
@@ -40,79 +39,82 @@
 
   // scalastyle:off
   /**
-    * BNF for basic parser
-    *
-    * -- literal --
-    * <literal> ::= <literal-string> | <literal-number> | <literal-time> | <literal-boolean> | <literal-null> | <literal-nan>
-    * <literal-string> ::= <any-string>
-    * <literal-number> ::= <integer> | <double>
-    * <literal-time> ::= <integer> ("d"|"h"|"m"|"s"|"ms")
-    * <literal-boolean> ::= true | false
-    * <literal-null> ::= null
-    * <literal-nan> ::= nan
-    *
-    * -- selection --
-    * <selection> ::= <selection-head> [ <field-sel> | <index-sel> | <function-sel> ]* [<as-alias>]?
-    * <selection-head> ::= ("data source name registered") | <function> | <field-name> | <all-selection>
-    * <field-sel> ::= "." <field-name> | "[" <quote-field-name> "]"
-    * <index-sel> ::= "[" <arg> "]"
-    * <function-sel> ::= "." <function-name> "(" [<arg>]? [, <arg>]* ")"
-    * <arg> ::= <math-expr>
-    *
-    * -- as alias --
-    * <as-alias> ::= <as> <field-name>
-    *
-    * -- math expr --
-    * <math-factor> ::= <literal> | <function> | <selection> | "(" <math-expr> ")" [<as-alias>]?
-    * <unary-math-expr> ::= [<unary-opr>]* <math-factor>
-    * <binary-math-expr> ::= <unary-math-expr> [<binary-opr> <unary-math-expr>]+
-    * <math-expr> ::= <binary-math-expr>
-    *
-    * -- logical expr --
-    * <in-expr> ::= <math-expr> [<not>]? <in> <range-expr>
-    * <between-expr> ::= <math-expr> [<not>]? <between> (<math-expr> <and> <math-expr> | <range-expr>)
-    * <range-expr> ::= "(" [<math-expr>]? [, <math-expr>]+ ")"
-    * <like-expr> ::= <math-expr> [<not>]? <like> <math-expr>
-    * <rlike-expr> ::= <math-expr> [<not>]? <rlike> <math-expr>
-    * <is-null-expr> ::= <math-expr> <is> [<not>]? <null>
-    * <is-nan-expr> ::= <math-expr> <is> [<not>]? <nan>
-    *
-    * <logical-factor> ::= <math-expr> | <in-expr> | <between-expr> | <like-expr> | <is-null-expr> | <is-nan-expr> | "(" <logical-expr> ")" [<as-alias>]?
-    * <unary-logical-expr> ::= [<unary-logical-opr>]* <logical-factor>
-    * <binary-logical-expr> ::= <unary-logical-expr> [<binary-logical-opr> <unary-logical-expr>]+
-    * <logical-expr> ::= <binary-logical-expr>
-    *
-    * -- expression --
-    * <expr> = <math-expr> | <logical-expr>
-    *
-    * -- function expr --
-    * <function> ::= <function-name> "(" [<arg>] [, <arg>]+ ")" [<as-alias>]?
-    * <function-name> ::= ("function name registered")
-    * <arg> ::= <expr>
-    *
-    * -- clauses --
-    * <select-clause> = <expr> [, <expr>]*
-    * <where-clause> = <where> <expr>
-    * <from-clause> = <from> ("data source name registered")
-    * <having-clause> = <having> <expr>
-    * <groupby-clause> = <group> <by> <expr> [ <having-clause> ]?
-    * <orderby-item> = <expr> [ <DESC> ]?
-    * <orderby-clause> = <order> <by> <orderby-item> [ , <orderby-item> ]*
-    * <limit-clause> = <limit> <expr>
-    *
-    * -- combined clauses --
-    * <combined-clauses> = <select-clause> [ <from-clause> ]+ [ <where-clause> ]+ [ <groupby-clause> ]+ [ <orderby-clause> ]+ [ <limit-clause> ]+
-    */
-
+   * BNF for basic parser
+   *
+   * -- literal --
+   * <literal> ::= <literal-string> | <literal-number> | <literal-time> | <literal-boolean> | <literal-null> | <literal-nan>
+   * <literal-string> ::= <any-string>
+   * <literal-number> ::= <integer> | <double>
+   * <literal-time> ::= <integer> ("d"|"h"|"m"|"s"|"ms")
+   * <literal-boolean> ::= true | false
+   * <literal-null> ::= null
+   * <literal-nan> ::= nan
+   *
+   * -- selection --
+   * <selection> ::= <selection-head> [ <field-sel> | <index-sel> | <function-sel> ]* [<as-alias>]?
+   * <selection-head> ::= ("data source name registered") | <function> | <field-name> | <all-selection>
+   * <field-sel> ::= "." <field-name> | "[" <quote-field-name> "]"
+   * <index-sel> ::= "[" <arg> "]"
+   * <function-sel> ::= "." <function-name> "(" [<arg>]? [, <arg>]* ")"
+   * <arg> ::= <math-expr>
+   *
+   * -- as alias --
+   * <as-alias> ::= <as> <field-name>
+   *
+   * -- math expr --
+   * <math-factor> ::= <literal> | <function> | <selection> | "(" <math-expr> ")" [<as-alias>]?
+   * <unary-math-expr> ::= [<unary-opr>]* <math-factor>
+   * <binary-math-expr> ::= <unary-math-expr> [<binary-opr> <unary-math-expr>]+
+   * <math-expr> ::= <binary-math-expr>
+   *
+   * -- logical expr --
+   * <in-expr> ::= <math-expr> [<not>]? <in> <range-expr>
+   * <between-expr> ::= <math-expr> [<not>]? <between> (<math-expr> <and> <math-expr> | <range-expr>)
+   * <range-expr> ::= "(" [<math-expr>]? [, <math-expr>]+ ")"
+   * <like-expr> ::= <math-expr> [<not>]? <like> <math-expr>
+   * <rlike-expr> ::= <math-expr> [<not>]? <rlike> <math-expr>
+   * <is-null-expr> ::= <math-expr> <is> [<not>]? <null>
+   * <is-nan-expr> ::= <math-expr> <is> [<not>]? <nan>
+   *
+   * <logical-factor> ::= <math-expr> | <in-expr> | <between-expr> | <like-expr> | <is-null-expr> | <is-nan-expr> | "(" <logical-expr> ")" [<as-alias>]?
+   * <unary-logical-expr> ::= [<unary-logical-opr>]* <logical-factor>
+   * <binary-logical-expr> ::= <unary-logical-expr> [<binary-logical-opr> <unary-logical-expr>]+
+   * <logical-expr> ::= <binary-logical-expr>
+   *
+   * -- expression --
+   * <expr> = <math-expr> | <logical-expr>
+   *
+   * -- function expr --
+   * <function> ::= <function-name> "(" [<arg>] [, <arg>]+ ")" [<as-alias>]?
+   * <function-name> ::= ("function name registered")
+   * <arg> ::= <expr>
+   *
+   * -- clauses --
+   * <select-clause> = <expr> [, <expr>]*
+   * <where-clause> = <where> <expr>
+   * <from-clause> = <from> ("data source name registered")
+   * <having-clause> = <having> <expr>
+   * <groupby-clause> = <group> <by> <expr> [ <having-clause> ]?
+   * <orderby-item> = <expr> [ <DESC> ]?
+   * <orderby-clause> = <order> <by> <orderby-item> [ , <orderby-item> ]*
+   * <limit-clause> = <limit> <expr>
+   *
+   * -- combined clauses --
+   * <combined-clauses> = <select-clause> [ <from-clause> ]+ [ <where-clause> ]+ [ <groupby-clause> ]+ [ <orderby-clause> ]+ [ <limit-clause> ]+
+   */
   protected def genDataSourceNamesParser(names: Seq[String]): Parser[String] = {
-    names.reverse.map {
-      fn => s"""(?i)`${fn}`|${fn}""".r: Parser[String]
-    }.reduce(_ | _)
+    names.reverse
+      .map { fn =>
+        s"""(?i)`$fn`|$fn""".r: Parser[String]
+      }
+      .reduce(_ | _)
   }
   protected def genFunctionNamesParser(names: Seq[String]): Parser[String] = {
-    names.reverse.map {
-      fn => s"""(?i)${fn}""".r: Parser[String]
-    }.reduce(_ | _)
+    names.reverse
+      .map { fn =>
+        s"""(?i)$fn""".r: Parser[String]
+      }
+      .reduce(_ | _)
   }
 
   object Literal {
@@ -123,7 +125,7 @@
 
   object Operator {
     val MATH_UNARY: Parser[String] = "+" | "-"
-    val MATH_BINARIES: Seq[Parser[String]] = Seq(("*" | "/" | "%"), ("+" | "-"))
+    val MATH_BINARIES: Seq[Parser[String]] = Seq("*" | "/" | "%", "+" | "-")
 
     val NOT: Parser[String] = """(?i)not\s""".r | "!"
     val AND: Parser[String] = """(?i)and\s""".r | "&&"
@@ -136,7 +138,7 @@
     val RLIKE: Parser[String] = """(?i)rlike\s""".r
     val COMPARE: Parser[String] = "=" | "!=" | "<>" | "<=" | ">=" | "<" | ">"
     val LOGICAL_UNARY: Parser[String] = NOT
-    val LOGICAL_BINARIES: Seq[Parser[String]] = Seq((COMPARE), (AND), (OR))
+    val LOGICAL_BINARIES: Seq[Parser[String]] = Seq(COMPARE, AND, OR)
 
     val LSQBR: Parser[String] = "["
     val RSQBR: Parser[String] = "]"
@@ -168,7 +170,7 @@
   import Operator._
 
   object Strings {
-    def AnyString: Parser[String] = """"(?:\"|[^\"])*"""".r | """'(?:\'|[^'])*'""".r
+    def AnyString: Parser[String] = """"(?:"|[^"])*"""".r | """'(?:'|[^'])*'""".r
     def SimpleTableFieldName: Parser[String] = """[a-zA-Z_]\w*""".r
     def UnQuoteTableFieldName: Parser[String] = """`(?:[\\][`]|[^`])*`""".r
 //    def FieldName: Parser[String] = UnQuoteTableFieldName | SimpleTableFieldName
@@ -185,118 +187,127 @@
   import Strings._
 
   /**
-    * -- literal --
-    * <literal> ::= <literal-string> | <literal-number> | <literal-time> | <literal-boolean> | <literal-null> | <literal-nan>
-    * <literal-string> ::= <any-string>
-    * <literal-number> ::= <integer> | <double>
-    * <literal-time> ::= <integer> ("d"|"h"|"m"|"s"|"ms")
-    * <literal-boolean> ::= true | false
-    * <literal-null> ::= null
-    * <literal-nan> ::= nan
-    */
-  def literal: Parser[LiteralExpr] = literalNull | literalNan | literalBoolean | literalString | literalTime | literalNumber
-  def literalNull: Parser[LiteralNullExpr] = NULL ^^ { LiteralNullExpr(_) }
-  def literalNan: Parser[LiteralNanExpr] = NAN ^^ { LiteralNanExpr(_) }
-  def literalString: Parser[LiteralStringExpr] = AnyString ^^ { LiteralStringExpr(_) }
-  def literalNumber: Parser[LiteralNumberExpr] = (DoubleNumber | IntegerNumber) ^^ { LiteralNumberExpr(_) }
-  def literalTime: Parser[LiteralTimeExpr] = TimeString ^^ { LiteralTimeExpr(_) }
-  def literalBoolean: Parser[LiteralBooleanExpr] = BooleanString ^^ { LiteralBooleanExpr(_) }
+   * -- literal --
+   * <literal> ::= <literal-string> | <literal-number> | <literal-time> | <literal-boolean> | <literal-null> | <literal-nan>
+   * <literal-string> ::= <any-string>
+   * <literal-number> ::= <integer> | <double>
+   * <literal-time> ::= <integer> ("d"|"h"|"m"|"s"|"ms")
+   * <literal-boolean> ::= true | false
+   * <literal-null> ::= null
+   * <literal-nan> ::= nan
+   */
+  def literal: Parser[LiteralExpr] =
+    literalNull | literalNan | literalBoolean | literalString | literalTime | literalNumber
+  def literalNull: Parser[LiteralNullExpr] = NULL ^^ { LiteralNullExpr }
+  def literalNan: Parser[LiteralNanExpr] = NAN ^^ { LiteralNanExpr }
+  def literalString: Parser[LiteralStringExpr] = AnyString ^^ { LiteralStringExpr }
+  def literalNumber: Parser[LiteralNumberExpr] = (DoubleNumber | IntegerNumber) ^^ {
+    LiteralNumberExpr
+  }
+  def literalTime: Parser[LiteralTimeExpr] = TimeString ^^ { LiteralTimeExpr }
+  def literalBoolean: Parser[LiteralBooleanExpr] = BooleanString ^^ { LiteralBooleanExpr }
 
   /**
-    * -- selection --
-    * <selection> ::= <selection-head> [ <field-sel> | <index-sel> | <function-sel> ]* [<as-alias>]?
-    * <selection-head> ::= ("data source name registered") | <function> | <field-name> | <all-selection>
-    * <field-sel> ::= "." <field-name> | "[" <quote-field-name> "]"
-    * <index-sel> ::= "[" <arg> "]"
-    * <function-sel> ::= "." <function-name> "(" [<arg>]? [, <arg>]* ")"
-    * <arg> ::= <math-expr>
-    */
-
+   * -- selection --
+   * <selection> ::= <selection-head> [ <field-sel> | <index-sel> | <function-sel> ]* [<as-alias>]?
+   * <selection-head> ::= ("data source name registered") | <function> | <field-name> | <all-selection>
+   * <field-sel> ::= "." <field-name> | "[" <quote-field-name> "]"
+   * <index-sel> ::= "[" <arg> "]"
+   * <function-sel> ::= "." <function-name> "(" [<arg>]? [, <arg>]* ")"
+   * <arg> ::= <math-expr>
+   */
   def selection: Parser[SelectionExpr] = selectionHead ~ rep(selector) ~ opt(asAlias) ^^ {
     case head ~ sels ~ aliasOpt => SelectionExpr(head, sels, aliasOpt)
   }
-  def selectionHead: Parser[HeadExpr] = DataSourceName ^^ {
-    ds => DataSourceHeadExpr(trim(ds))
-  } | function ^^ {
-    OtherHeadExpr(_)
-  } | SimpleTableFieldName ^^ {
-    FieldNameHeadExpr(_)
-  } | UnQuoteTableFieldName ^^ { s =>
-    FieldNameHeadExpr(trim(s))
-  } | ALLSL ^^ { _ =>
-    AllSelectHeadExpr()
-  }
+  def selectionHead: Parser[HeadExpr] =
+    DataSourceName ^^ { ds =>
+      DataSourceHeadExpr(trim(ds))
+    } | function ^^ {
+      OtherHeadExpr(_)
+    } | SimpleTableFieldName ^^ {
+      FieldNameHeadExpr
+    } | UnQuoteTableFieldName ^^ { s =>
+      FieldNameHeadExpr(trim(s))
+    } | ALLSL ^^ { _ =>
+      AllSelectHeadExpr()
+    }
   def selector: Parser[SelectExpr] = functionSelect | allFieldsSelect | fieldSelect | indexSelect
-  def allFieldsSelect: Parser[AllFieldsSelectExpr] = DOT ~> ALLSL ^^ { _ => AllFieldsSelectExpr() }
-  def fieldSelect: Parser[FieldSelectExpr] = DOT ~> (
-    SimpleTableFieldName ^^ {
-      FieldSelectExpr(_)
+  def allFieldsSelect: Parser[AllFieldsSelectExpr] = DOT ~> ALLSL ^^ { _ =>
+    AllFieldsSelectExpr()
+  }
+  def fieldSelect: Parser[FieldSelectExpr] =
+    DOT ~> (SimpleTableFieldName ^^ {
+      FieldSelectExpr
     } | UnQuoteTableFieldName ^^ { s =>
       FieldSelectExpr(trim(s))
     })
-  def indexSelect: Parser[IndexSelectExpr] = LSQBR ~> argument <~ RSQBR ^^ { IndexSelectExpr(_) }
-  def functionSelect: Parser[FunctionSelectExpr] = DOT ~ FunctionName ~ LBR ~ repsep(argument, COMMA) ~ RBR ^^ {
-    case _ ~ name ~ _ ~ args ~ _ => FunctionSelectExpr(name, args)
-  }
+  def indexSelect: Parser[IndexSelectExpr] = LSQBR ~> argument <~ RSQBR ^^ { IndexSelectExpr }
+  def functionSelect: Parser[FunctionSelectExpr] =
+    DOT ~ FunctionName ~ LBR ~ repsep(argument, COMMA) ~ RBR ^^ {
+      case _ ~ name ~ _ ~ args ~ _ => FunctionSelectExpr(name, args)
+    }
 
   /**
-    * -- as alias --
-    * <as-alias> ::= <as> <field-name>
-    */
-  def asAlias: Parser[String] = AS ~> (SimpleTableFieldName | UnQuoteTableFieldName ^^ { trim(_) })
+   * -- as alias --
+   * <as-alias> ::= <as> <field-name>
+   */
+  def asAlias: Parser[String] = AS ~> (SimpleTableFieldName | UnQuoteTableFieldName ^^ { trim })
 
   /**
-    * -- math expr --
-    * <math-factor> ::= <literal> | <function> | <selection> | "(" <math-expr> ")" [<as-alias>]?
-    * <unary-math-expr> ::= [<unary-opr>]* <math-factor>
-    * <binary-math-expr> ::= <unary-math-expr> [<binary-opr> <unary-math-expr>]+
-    * <math-expr> ::= <binary-math-expr>
-    */
-
-  def mathFactor: Parser[MathExpr] = (literal | function | selection) ^^ {
-    MathFactorExpr(_, false, None)
-  } | LBR ~ mathExpression ~ RBR ~ opt(asAlias) ^^ {
-    case _ ~ expr ~ _ ~ aliasOpt => MathFactorExpr(expr, true, aliasOpt)
-  }
+   * -- math expr --
+   * <math-factor> ::= <literal> | <function> | <selection> | "(" <math-expr> ")" [<as-alias>]?
+   * <unary-math-expr> ::= [<unary-opr>]* <math-factor>
+   * <binary-math-expr> ::= <unary-math-expr> [<binary-opr> <unary-math-expr>]+
+   * <math-expr> ::= <binary-math-expr>
+   */
+  def mathFactor: Parser[MathExpr] =
+    (literal | function | selection) ^^ {
+      MathFactorExpr(_, withBracket = false, None)
+    } | LBR ~ mathExpression ~ RBR ~ opt(asAlias) ^^ {
+      case _ ~ expr ~ _ ~ aliasOpt => MathFactorExpr(expr, withBracket = true, aliasOpt)
+    }
   def unaryMathExpression: Parser[MathExpr] = rep(MATH_UNARY) ~ mathFactor ^^ {
     case Nil ~ a => a
     case list ~ a => UnaryMathExpr(list, a)
   }
   def binaryMathExpressions: Seq[Parser[MathExpr]] =
-    MATH_BINARIES.foldLeft(List[Parser[MathExpr]](unaryMathExpression)) { (parsers, binaryParser) =>
-      val pre = parsers.headOption.orNull
-      val cur = pre ~ rep(binaryParser ~ pre) ^^ {
-        case a ~ Nil => a
-        case a ~ list => BinaryMathExpr(a, list.map(c => (c._1, c._2)))
-      }
-      cur :: parsers
+    MATH_BINARIES.foldLeft(List[Parser[MathExpr]](unaryMathExpression)) {
+      (parsers, binaryParser) =>
+        val pre = parsers.headOption.orNull
+        val cur = pre ~ rep(binaryParser ~ pre) ^^ {
+          case a ~ Nil => a
+          case a ~ list => BinaryMathExpr(a, list.map(c => (c._1, c._2)))
+        }
+        cur :: parsers
     }
   def mathExpression: Parser[MathExpr] = binaryMathExpressions.headOption.orNull
 
   /**
-    * -- logical expr --
-    * <in-expr> ::= <math-expr> [<not>]? <in> <range-expr>
-    * <between-expr> ::= <math-expr> [<not>]? <between> (<math-expr> <and> <math-expr> | <range-expr>)
-    * <range-expr> ::= "(" [<math-expr>]? [, <math-expr>]+ ")"
-    * <like-expr> ::= <math-expr> [<not>]? <like> <math-expr>
-    * <rlike-expr> ::= <math-expr> [<not>]? <rlike> <math-expr>
-    * <is-null-expr> ::= <math-expr> <is> [<not>]? <null>
-    * <is-nan-expr> ::= <math-expr> <is> [<not>]? <nan>
-    *
-    * <logical-factor> ::= <math-expr> | <in-expr> | <between-expr> | <like-expr> | <is-null-expr> | <is-nan-expr> | "(" <logical-expr> ")" [<as-alias>]?
-    * <unary-logical-expr> ::= [<unary-logical-opr>]* <logical-factor>
-    * <binary-logical-expr> ::= <unary-logical-expr> [<binary-logical-opr> <unary-logical-expr>]+
-    * <logical-expr> ::= <binary-logical-expr>
-    */
-
-  def inExpr: Parser[LogicalExpr] = mathExpression ~ opt(NOT) ~ IN ~ LBR ~ repsep(mathExpression, COMMA) ~ RBR ^^ {
-    case head ~ notOpt ~ _ ~ _ ~ list ~ _ => InExpr(head, notOpt.isEmpty, list)
-  }
-  def betweenExpr: Parser[LogicalExpr] = mathExpression ~ opt(NOT) ~ BETWEEN ~ LBR ~ repsep(mathExpression, COMMA) ~ RBR ^^ {
-    case head ~ notOpt ~ _ ~ _ ~ list ~ _ => BetweenExpr(head, notOpt.isEmpty, list)
-  } | mathExpression ~ opt(NOT) ~ BETWEEN ~ mathExpression ~ AND_ONLY ~ mathExpression ^^ {
-    case head ~ notOpt ~ _ ~ first ~ _ ~ second => BetweenExpr(head, notOpt.isEmpty, Seq(first, second))
-  }
+   * -- logical expr --
+   * <in-expr> ::= <math-expr> [<not>]? <in> <range-expr>
+   * <between-expr> ::= <math-expr> [<not>]? <between> (<math-expr> <and> <math-expr> | <range-expr>)
+   * <range-expr> ::= "(" [<math-expr>]? [, <math-expr>]+ ")"
+   * <like-expr> ::= <math-expr> [<not>]? <like> <math-expr>
+   * <rlike-expr> ::= <math-expr> [<not>]? <rlike> <math-expr>
+   * <is-null-expr> ::= <math-expr> <is> [<not>]? <null>
+   * <is-nan-expr> ::= <math-expr> <is> [<not>]? <nan>
+   *
+   * <logical-factor> ::= <math-expr> | <in-expr> | <between-expr> | <like-expr> | <is-null-expr> | <is-nan-expr> | "(" <logical-expr> ")" [<as-alias>]?
+   * <unary-logical-expr> ::= [<unary-logical-opr>]* <logical-factor>
+   * <binary-logical-expr> ::= <unary-logical-expr> [<binary-logical-opr> <unary-logical-expr>]+
+   * <logical-expr> ::= <binary-logical-expr>
+   */
+  def inExpr: Parser[LogicalExpr] =
+    mathExpression ~ opt(NOT) ~ IN ~ LBR ~ repsep(mathExpression, COMMA) ~ RBR ^^ {
+      case head ~ notOpt ~ _ ~ _ ~ list ~ _ => InExpr(head, notOpt.isEmpty, list)
+    }
+  def betweenExpr: Parser[LogicalExpr] =
+    mathExpression ~ opt(NOT) ~ BETWEEN ~ LBR ~ repsep(mathExpression, COMMA) ~ RBR ^^ {
+      case head ~ notOpt ~ _ ~ _ ~ list ~ _ => BetweenExpr(head, notOpt.isEmpty, list)
+    } | mathExpression ~ opt(NOT) ~ BETWEEN ~ mathExpression ~ AND_ONLY ~ mathExpression ^^ {
+      case head ~ notOpt ~ _ ~ first ~ _ ~ second =>
+        BetweenExpr(head, notOpt.isEmpty, Seq(first, second))
+    }
   def likeExpr: Parser[LogicalExpr] = mathExpression ~ opt(NOT) ~ LIKE ~ mathExpression ^^ {
     case head ~ notOpt ~ _ ~ value => LikeExpr(head, notOpt.isEmpty, value)
   }
@@ -310,67 +321,71 @@
     case head ~ _ ~ notOpt ~ _ => IsNanExpr(head, notOpt.isEmpty)
   }
 
-  def logicalFactor: Parser[LogicalExpr] = (inExpr | betweenExpr | likeExpr | rlikeExpr | isNullExpr | isNanExpr | mathExpression) ^^ {
-    LogicalFactorExpr(_, false, None)
-  } | LBR ~ logicalExpression ~ RBR ~ opt(asAlias) ^^ {
-    case _ ~ expr ~ _ ~ aliasOpt => LogicalFactorExpr(expr, true, aliasOpt)
-  }
+  def logicalFactor: Parser[LogicalExpr] =
+    (inExpr | betweenExpr | likeExpr | rlikeExpr | isNullExpr | isNanExpr | mathExpression) ^^ {
+      LogicalFactorExpr(_, withBracket = false, None)
+    } | LBR ~ logicalExpression ~ RBR ~ opt(asAlias) ^^ {
+      case _ ~ expr ~ _ ~ aliasOpt => LogicalFactorExpr(expr, withBracket = true, aliasOpt)
+    }
   def unaryLogicalExpression: Parser[LogicalExpr] = rep(LOGICAL_UNARY) ~ logicalFactor ^^ {
     case Nil ~ a => a
     case list ~ a => UnaryLogicalExpr(list, a)
   }
   def binaryLogicalExpressions: Seq[Parser[LogicalExpr]] =
-    LOGICAL_BINARIES.foldLeft(List[Parser[LogicalExpr]](unaryLogicalExpression)) { (parsers, binaryParser) =>
-      val pre = parsers.headOption.orNull
-      val cur = pre ~ rep(binaryParser ~ pre) ^^ {
-        case a ~ Nil => a
-        case a ~ list => BinaryLogicalExpr(a, list.map(c => (c._1, c._2)))
-      }
-      cur :: parsers
+    LOGICAL_BINARIES.foldLeft(List[Parser[LogicalExpr]](unaryLogicalExpression)) {
+      (parsers, binaryParser) =>
+        val pre = parsers.headOption.orNull
+        val cur = pre ~ rep(binaryParser ~ pre) ^^ {
+          case a ~ Nil => a
+          case a ~ list => BinaryLogicalExpr(a, list.map(c => (c._1, c._2)))
+        }
+        cur :: parsers
     }
   def logicalExpression: Parser[LogicalExpr] = binaryLogicalExpressions.headOption.orNull
 
   /**
-    * -- expression --
-    * <expr> = <math-expr> | <logical-expr>
-    */
-
+   * -- expression --
+   * <expr> = <math-expr> | <logical-expr>
+   */
   def expression: Parser[Expr] = logicalExpression | mathExpression
 
   /**
-    * -- function expr --
-    * <function> ::= <function-name> "(" [<arg>] [, <arg>]+ ")" [<as-alias>]?
-    * <function-name> ::= ("function name registered")
-    * <arg> ::= <expr>
-    */
-
-  def function: Parser[FunctionExpr] = FunctionName ~ LBR ~ opt(DISTINCT) ~ repsep(argument, COMMA) ~ RBR ~ opt(asAlias) ^^ {
-    case name ~ _ ~ extraCdtnOpt ~ args ~ _ ~ aliasOpt =>
-      FunctionExpr(name, args, extraCdtnOpt.map(ExtraConditionExpr(_)), aliasOpt)
-  }
+   * -- function expr --
+   * <function> ::= <function-name> "(" [<arg>] [, <arg>]+ ")" [<as-alias>]?
+   * <function-name> ::= ("function name registered")
+   * <arg> ::= <expr>
+   */
+  def function: Parser[FunctionExpr] =
+    FunctionName ~ LBR ~ opt(DISTINCT) ~ repsep(argument, COMMA) ~ RBR ~ opt(asAlias) ^^ {
+      case name ~ _ ~ extraCdtnOpt ~ args ~ _ ~ aliasOpt =>
+        FunctionExpr(name, args, extraCdtnOpt.map(ExtraConditionExpr), aliasOpt)
+    }
   def argument: Parser[Expr] = expression
 
   /**
-    * -- clauses --
-    * <select-clause> = <expr> [, <expr>]*
-    * <where-clause> = <where> <expr>
-    * <from-clause> = <from> ("data source name registered")
-    * <having-clause> = <having> <expr>
-    * <groupby-clause> = <group> <by> <expr> [ <having-clause> ]?
-    * <orderby-item> = <expr> [ <DESC> ]?
-    * <orderby-clause> = <order> <by> <orderby-item> [ , <orderby-item> ]*
-    * <limit-clause> = <limit> <expr>
-    */
-
-  def selectClause: Parser[SelectClause] = opt(SELECT) ~> opt(DISTINCT) ~ rep1sep(expression, COMMA) ^^ {
-    case extraCdtnOpt ~ exprs => SelectClause(exprs, extraCdtnOpt.map(ExtraConditionExpr(_)))
+   * -- clauses --
+   * <select-clause> = <expr> [, <expr>]*
+   * <where-clause> = <where> <expr>
+   * <from-clause> = <from> ("data source name registered")
+   * <having-clause> = <having> <expr>
+   * <groupby-clause> = <group> <by> <expr> [ <having-clause> ]?
+   * <orderby-item> = <expr> [ <DESC> ]?
+   * <orderby-clause> = <order> <by> <orderby-item> [ , <orderby-item> ]*
+   * <limit-clause> = <limit> <expr>
+   */
+  def selectClause: Parser[SelectClause] =
+    opt(SELECT) ~> opt(DISTINCT) ~ rep1sep(expression, COMMA) ^^ {
+      case extraCdtnOpt ~ exprs => SelectClause(exprs, extraCdtnOpt.map(ExtraConditionExpr))
+    }
+  def fromClause: Parser[FromClause] = FROM ~> DataSourceName ^^ { ds =>
+    FromClause(trim(ds))
   }
-  def fromClause: Parser[FromClause] = FROM ~> DataSourceName ^^ { ds => FromClause(trim(ds)) }
-  def whereClause: Parser[WhereClause] = WHERE ~> expression ^^ { WhereClause(_) }
+  def whereClause: Parser[WhereClause] = WHERE ~> expression ^^ { WhereClause }
   def havingClause: Parser[Expr] = HAVING ~> expression
-  def groupbyClause: Parser[GroupbyClause] = GROUP ~ BY ~ rep1sep(expression, COMMA) ~ opt(havingClause) ^^ {
-    case _ ~ _ ~ cols ~ havingOpt => GroupbyClause(cols, havingOpt)
-  }
+  def groupbyClause: Parser[GroupbyClause] =
+    GROUP ~ BY ~ rep1sep(expression, COMMA) ~ opt(havingClause) ^^ {
+      case _ ~ _ ~ cols ~ havingOpt => GroupbyClause(cols, havingOpt)
+    }
   def orderItem: Parser[OrderItem] = expression ~ opt(DESC | ASC) ^^ {
     case expr ~ orderOpt => OrderItem(expr, orderOpt)
   }
@@ -380,18 +395,18 @@
   def sortbyClause: Parser[SortbyClause] = SORT ~ BY ~ rep1sep(orderItem, COMMA) ^^ {
     case _ ~ _ ~ cols => SortbyClause(cols)
   }
-  def limitClause: Parser[LimitClause] = LIMIT ~> expression ^^ { LimitClause(_) }
+  def limitClause: Parser[LimitClause] = LIMIT ~> expression ^^ { LimitClause }
 
   /**
-    * -- combined clauses --
-    * <combined-clauses> = <select-clause> [ <from-clause> ]+ [ <where-clause> ]+ [ <groupby-clause> ]+ [ <orderby-clause> ]+ [ <limit-clause> ]+
-    */
-
-  def combinedClause: Parser[CombinedClause] = selectClause ~ opt(fromClause) ~ opt(whereClause) ~
-    opt(groupbyClause) ~ opt(orderbyClause) ~ opt(limitClause) ^^ {
-    case sel ~ fromOpt ~ whereOpt ~ groupbyOpt ~ orderbyOpt ~ limitOpt =>
-      val tails = Seq(whereOpt, groupbyOpt, orderbyOpt, limitOpt).flatMap(opt => opt)
-      CombinedClause(sel, fromOpt, tails)
-  }
+   * -- combined clauses --
+   * <combined-clauses> = <select-clause> [ <from-clause> ]+ [ <where-clause> ]+ [ <groupby-clause> ]+ [ <orderby-clause> ]+ [ <limit-clause> ]+
+   */
+  def combinedClause: Parser[CombinedClause] =
+    selectClause ~ opt(fromClause) ~ opt(whereClause) ~
+      opt(groupbyClause) ~ opt(orderbyClause) ~ opt(limitClause) ^^ {
+      case sel ~ fromOpt ~ whereOpt ~ groupbyOpt ~ orderbyOpt ~ limitOpt =>
+        val tails = Seq(whereOpt, groupbyOpt, orderbyOpt, limitOpt).flatten
+        CombinedClause(sel, fromOpt, tails)
+    }
   // scalastyle:on
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/parser/GriffinDslParser.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/parser/GriffinDslParser.scala
index baec077..5c3c2de 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/parser/GriffinDslParser.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/parser/GriffinDslParser.scala
@@ -21,10 +21,10 @@
 import org.apache.griffin.measure.step.builder.dsl.expr._
 
 /**
-  * parser for griffin dsl rule
-  */
-case class GriffinDslParser(dataSourceNames: Seq[String], functionNames: Seq[String]
-                           ) extends BasicParser {
+ * parser for griffin dsl rule
+ */
+case class GriffinDslParser(dataSourceNames: Seq[String], functionNames: Seq[String])
+    extends BasicParser {
 
   import Operator._
 
@@ -33,51 +33,48 @@
    * <profiling-clauses> = <select-clause> [ <from-clause> ]+ [ <where-clause> ]+
    *  [ <groupby-clause> ]+ [ <orderby-clause> ]+ [ <limit-clause> ]+
    */
-  def profilingClause: Parser[ProfilingClause] = selectClause ~ opt(fromClause) ~ opt(whereClause) ~
-    opt(groupbyClause) ~ opt(orderbyClause) ~ opt(limitClause) ^^ {
-    case sel ~ fromOpt ~ whereOpt ~ groupbyOpt ~ orderbyOpt ~ limitOpt =>
-      val preClauses = Seq(whereOpt).flatMap(opt => opt)
-      val postClauses = Seq(orderbyOpt, limitOpt).flatMap(opt => opt)
-      ProfilingClause(sel, fromOpt, groupbyOpt, preClauses, postClauses)
-  }
+  def profilingClause: Parser[ProfilingClause] =
+    selectClause ~ opt(fromClause) ~ opt(whereClause) ~
+      opt(groupbyClause) ~ opt(orderbyClause) ~ opt(limitClause) ^^ {
+      case sel ~ fromOpt ~ whereOpt ~ groupbyOpt ~ orderbyOpt ~ limitOpt =>
+        val preClauses = Seq(whereOpt).flatten
+        val postClauses = Seq(orderbyOpt, limitOpt).flatten
+        ProfilingClause(sel, fromOpt, groupbyOpt, preClauses, postClauses)
+    }
 
   /**
-    * -- uniqueness clauses --
-    * <uniqueness-clauses> = <expr> [, <expr>]+
-    */
-  def uniquenessClause: Parser[UniquenessClause] = rep1sep(expression, Operator.COMMA) ^^ {
-    case exprs => UniquenessClause(exprs)
-  }
+   * -- uniqueness clauses --
+   * <uniqueness-clauses> = <expr> [, <expr>]+
+   */
+  def uniquenessClause: Parser[UniquenessClause] =
+    rep1sep(expression, Operator.COMMA) ^^ (exprs => UniquenessClause(exprs))
 
   /**
-    * -- distinctness clauses --
-    * <sqbr-expr> = "[" <expr> "]"
-    * <dist-expr> = <sqbr-expr> | <expr>
-    * <distinctness-clauses> = <distExpr> [, <distExpr>]+
-    */
-  def sqbrExpr: Parser[Expr] = LSQBR ~> expression <~ RSQBR ^^ {
-    case expr => expr.tag = "[]"; expr
+   * -- distinctness clauses --
+   * <sqbr-expr> = "[" <expr> "]"
+   * <dist-expr> = <sqbr-expr> | <expr>
+   * <distinctness-clauses> = <distExpr> [, <distExpr>]+
+   */
+  def sqbrExpr: Parser[Expr] = LSQBR ~> expression <~ RSQBR ^^ { expr =>
+    expr.tag = "[]"; expr
   }
   def distExpr: Parser[Expr] = expression | sqbrExpr
-  def distinctnessClause: Parser[DistinctnessClause] = rep1sep(distExpr, Operator.COMMA) ^^ {
-    case exprs => DistinctnessClause(exprs)
-  }
+  def distinctnessClause: Parser[DistinctnessClause] =
+    rep1sep(distExpr, Operator.COMMA) ^^ (exprs => DistinctnessClause(exprs))
 
   /**
-    * -- timeliness clauses --
-    * <timeliness-clauses> = <expr> [, <expr>]+
-    */
-  def timelinessClause: Parser[TimelinessClause] = rep1sep(expression, Operator.COMMA) ^^ {
-    case exprs => TimelinessClause(exprs)
-  }
+   * -- timeliness clauses --
+   * <timeliness-clauses> = <expr> [, <expr>]+
+   */
+  def timelinessClause: Parser[TimelinessClause] =
+    rep1sep(expression, Operator.COMMA) ^^ (exprs => TimelinessClause(exprs))
 
   /**
-    * -- completeness clauses --
-    * <completeness-clauses> = <expr> [, <expr>]+
-    */
-  def completenessClause: Parser[CompletenessClause] = rep1sep(expression, Operator.COMMA) ^^ {
-    case exprs => CompletenessClause(exprs)
-  }
+   * -- completeness clauses --
+   * <completeness-clauses> = <expr> [, <expr>]+
+   */
+  def completenessClause: Parser[CompletenessClause] =
+    rep1sep(expression, Operator.COMMA) ^^ (exprs => CompletenessClause(exprs))
 
   def parseRule(rule: String, dqType: DqType): ParseResult[Expr] = {
     val rootExpr = dqType match {
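
The reformatted parsers are easiest to sanity-check interactively. A small sketch (not part of this diff) exercising the BNF documented in BasicParser; the registered data source and function names here are hypothetical:

```scala
import org.apache.griffin.measure.step.builder.dsl.parser.GriffinDslParser

val parser = GriffinDslParser(Seq("source", "target"), Seq("count"))

// parseAll comes from JavaTokenParsers; logicalExpression is the <logical-expr> production above
val result = parser.parseAll(parser.logicalExpression, "source.id = target.id AND source.age > 16")
if (result.successful) println(result.get.desc) // SQL-like rendering of the parsed tree
else println(result)
```

(`parseRule` above dispatches on the DqType to choose the root parser in the same way.)
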
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/AccuracyExpr2DQSteps.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/AccuracyExpr2DQSteps.scala
index c9e3c1c..21fd24c 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/AccuracyExpr2DQSteps.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/AccuracyExpr2DQSteps.scala
@@ -26,7 +26,11 @@
 import org.apache.griffin.measure.step.builder.ConstantColumns
 import org.apache.griffin.measure.step.builder.dsl.expr._
 import org.apache.griffin.measure.step.builder.dsl.transform.analyzer.AccuracyAnalyzer
-import org.apache.griffin.measure.step.transform.{DataFrameOps, DataFrameOpsTransformStep, SparkSqlTransformStep}
+import org.apache.griffin.measure.step.transform.{
+  DataFrameOps,
+  DataFrameOpsTransformStep,
+  SparkSqlTransformStep
+}
 import org.apache.griffin.measure.step.transform.DataFrameOps.AccuracyOprKeys
 import org.apache.griffin.measure.step.write.{
   DataSourceUpdateWriteStep,
@@ -36,11 +40,9 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * generate accuracy dq steps
-  */
-case class AccuracyExpr2DQSteps(context: DQContext,
-                                expr: Expr,
-                                ruleParam: RuleParam)
+ * generate accuracy dq steps
+ */
+case class AccuracyExpr2DQSteps(context: DQContext, expr: Expr, ruleParam: RuleParam)
     extends Expr2DQSteps {
 
   private object AccuracyKeys {
@@ -53,7 +55,7 @@
   }
   import AccuracyKeys._
 
-  def getDQSteps(): Seq[DQStep] = {
+  def getDQSteps: Seq[DQStep] = {
     val details = ruleParam.getDetails
     val accuracyExpr = expr.asInstanceOf[LogicalExpr]
 
@@ -65,16 +67,16 @@
     val timestamp = context.contextId.timestamp
 
     if (!context.runTimeTableRegister.existsTable(sourceName)) {
-      warn(s"[${timestamp}] data source ${sourceName} not exists")
+      warn(s"[$timestamp] data source $sourceName not exists")
       Nil
     } else {
       // 1. miss record
       val missRecordsTableName = "__missRecords"
-      val selClause = s"`${sourceName}`.*"
+      val selClause = s"`$sourceName`.*"
       val missRecordsSql =
         if (!context.runTimeTableRegister.existsTable(targetName)) {
-          warn(s"[${timestamp}] data source ${targetName} not exists")
-          s"SELECT ${selClause} FROM `${sourceName}`"
+          warn(s"[$timestamp] data source $targetName not exists")
+          s"SELECT $selClause FROM `$sourceName`"
         } else {
           val onClause = expr.coalesceDesc
           val sourceIsNull = analyzer.sourceSelectionExprs
@@ -87,9 +89,9 @@
               s"${sel.desc} IS NULL"
             }
             .mkString(" AND ")
-          val whereClause = s"(NOT (${sourceIsNull})) AND (${targetIsNull})"
-          s"SELECT ${selClause} FROM `${sourceName}` " +
-            s"LEFT JOIN `${targetName}` ON ${onClause} WHERE ${whereClause}"
+          val whereClause = s"(NOT ($sourceIsNull)) AND ($targetIsNull)"
+          s"SELECT $selClause FROM `$sourceName` " +
+            s"LEFT JOIN `$targetName` ON $onClause WHERE $whereClause"
         }
 
       val missRecordsWriteSteps = procType match {
@@ -115,18 +117,17 @@
           missRecordsSql,
           emptyMap,
           Some(missRecordsWriteSteps),
-          true
-        )
+          cache = true)
 
       // 2. miss count
       val missCountTableName = "__missCount"
       val missColName = details.getStringOrKey(_miss)
       val missCountSql = procType match {
         case BatchProcessType =>
-          s"SELECT COUNT(*) AS `${missColName}` FROM `${missRecordsTableName}`"
+          s"SELECT COUNT(*) AS `$missColName` FROM `$missRecordsTableName`"
         case StreamingProcessType =>
-          s"SELECT `${ConstantColumns.tmst}`,COUNT(*) AS `${missColName}` " +
-            s"FROM `${missRecordsTableName}` GROUP BY `${ConstantColumns.tmst}`"
+          s"SELECT `${ConstantColumns.tmst}`,COUNT(*) AS `$missColName` " +
+            s"FROM `$missRecordsTableName` GROUP BY `${ConstantColumns.tmst}`"
       }
       val missCountTransStep =
         SparkSqlTransformStep(missCountTableName, missCountSql, emptyMap)
@@ -137,10 +138,10 @@
       val totalColName = details.getStringOrKey(_total)
       val totalCountSql = procType match {
         case BatchProcessType =>
-          s"SELECT COUNT(*) AS `${totalColName}` FROM `${sourceName}`"
+          s"SELECT COUNT(*) AS `$totalColName` FROM `$sourceName`"
         case StreamingProcessType =>
-          s"SELECT `${ConstantColumns.tmst}`, COUNT(*) AS `${totalColName}` " +
-            s"FROM `${sourceName}` GROUP BY `${ConstantColumns.tmst}`"
+          s"SELECT `${ConstantColumns.tmst}`, COUNT(*) AS `$totalColName` " +
+            s"FROM `$sourceName` GROUP BY `${ConstantColumns.tmst}`"
       }
       val totalCountTransStep =
         SparkSqlTransformStep(totalCountTableName, totalCountSql, emptyMap)
@@ -152,27 +153,27 @@
       val accuracyMetricSql = procType match {
         case BatchProcessType =>
           s"""
-             SELECT A.total AS `${totalColName}`,
-                    A.miss AS `${missColName}`,
-                    (A.total - A.miss) AS `${matchedColName}`,
-                    coalesce( (A.total - A.miss) / A.total, 1.0) AS `${matchedFractionColName}`
+             SELECT A.total AS `$totalColName`,
+                    A.miss AS `$missColName`,
+                    (A.total - A.miss) AS `$matchedColName`,
+                    coalesce( (A.total - A.miss) / A.total, 1.0) AS `$matchedFractionColName`
              FROM (
-               SELECT `${totalCountTableName}`.`${totalColName}` AS total,
-                      coalesce(`${missCountTableName}`.`${missColName}`, 0) AS miss
-               FROM `${totalCountTableName}` LEFT JOIN `${missCountTableName}`
+               SELECT `$totalCountTableName`.`$totalColName` AS total,
+                      coalesce(`$missCountTableName`.`$missColName`, 0) AS miss
+               FROM `$totalCountTableName` LEFT JOIN `$missCountTableName`
              ) AS A
          """
         case StreamingProcessType =>
           // scalastyle:off
           s"""
-             |SELECT `${totalCountTableName}`.`${ConstantColumns.tmst}` AS `${ConstantColumns.tmst}`,
-             |`${totalCountTableName}`.`${totalColName}` AS `${totalColName}`,
-             |coalesce(`${missCountTableName}`.`${missColName}`, 0) AS `${missColName}`,
-             |(`${totalCountTableName}`.`${totalColName}` - coalesce(`${missCountTableName}`.`${missColName}`, 0)) AS `${matchedColName}`
-             |FROM `${totalCountTableName}` LEFT JOIN `${missCountTableName}`
-             |ON `${totalCountTableName}`.`${ConstantColumns.tmst}` = `${missCountTableName}`.`${ConstantColumns.tmst}`
+             |SELECT `$totalCountTableName`.`${ConstantColumns.tmst}` AS `${ConstantColumns.tmst}`,
+             |`$totalCountTableName`.`$totalColName` AS `$totalColName`,
+             |coalesce(`$missCountTableName`.`$missColName`, 0) AS `$missColName`,
+             |(`$totalCountTableName`.`$totalColName` - coalesce(`$missCountTableName`.`$missColName`, 0)) AS `$matchedColName`
+             |FROM `$totalCountTableName` LEFT JOIN `$missCountTableName`
+             |ON `$totalCountTableName`.`${ConstantColumns.tmst}` = `$missCountTableName`.`${ConstantColumns.tmst}`
          """.stripMargin
-         // scalastyle:on
+        // scalastyle:on
       }
 
       val accuracyMetricWriteStep = procType match {
@@ -192,8 +193,7 @@
           accuracyTableName,
           accuracyMetricSql,
           emptyMap,
-          accuracyMetricWriteStep
-        )
+          accuracyMetricWriteStep)
       accuracyTransStep.parentSteps += missCountTransStep
       accuracyTransStep.parentSteps += totalCountTransStep
 
@@ -207,8 +207,7 @@
           val accuracyMetricDetails: Map[String, Any] = Map(
             (AccuracyOprKeys._miss, missColName),
             (AccuracyOprKeys._total, totalColName),
-            (AccuracyOprKeys._matched, matchedColName)
-          )
+            (AccuracyOprKeys._matched, matchedColName))
           val accuracyMetricWriteStep = {
             val metricOpt = ruleParam.getOutputOpt(MetricOutputType)
             val mwName = metricOpt
@@ -224,8 +223,7 @@
             accuracyTableName,
             accuracyMetricRule,
             accuracyMetricDetails,
-            Some(accuracyMetricWriteStep)
-          )
+            Some(accuracyMetricWriteStep))
           accuracyMetricTransStep.parentSteps += accuracyTransStep
 
           // 6. collect accuracy records
@@ -233,7 +231,7 @@
           val accuracyRecordSql = {
             s"""
                |SELECT `${ConstantColumns.tmst}`, `${ConstantColumns.empty}`
-               |FROM `${accuracyMetricTableName}` WHERE `${ConstantColumns.record}`
+               |FROM `$accuracyMetricTableName` WHERE `${ConstantColumns.record}`
              """.stripMargin
           }
 
@@ -244,18 +242,13 @@
                 .flatMap(_.getNameOpt)
                 .getOrElse(missRecordsTableName)
 
-            RecordWriteStep(
-              rwName,
-              missRecordsTableName,
-              Some(accuracyRecordTableName)
-            )
+            RecordWriteStep(rwName, missRecordsTableName, Some(accuracyRecordTableName))
           }
           val accuracyRecordTransStep = SparkSqlTransformStep(
             accuracyRecordTableName,
             accuracyRecordSql,
             emptyMap,
-            Some(accuracyRecordWriteStep)
-          )
+            Some(accuracyRecordWriteStep))
           accuracyRecordTransStep.parentSteps += accuracyMetricTransStep
 
           accuracyRecordTransStep :: Nil
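
As a concrete (hypothetical) illustration of what these steps emit: for a batch accuracy rule `source.id = target.id`, the miss-record step above would produce roughly ``SELECT `source`.* FROM `source` LEFT JOIN `target` ON coalesce(`source`.`id`, '') = coalesce(`target`.`id`, '') WHERE (NOT (`source`.`id` IS NULL)) AND (`target`.`id` IS NULL)``; the two count steps then reduce that table and the source table to single miss/total counts, and the final metric step joins them to derive the matched count and fraction.
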
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/CompletenessExpr2DQSteps.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/CompletenessExpr2DQSteps.scala
index 360c8e9..56323e4 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/CompletenessExpr2DQSteps.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/CompletenessExpr2DQSteps.scala
@@ -33,12 +33,10 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * generate completeness dq steps
-  */
-case class CompletenessExpr2DQSteps(context: DQContext,
-                                    expr: Expr,
-                                    ruleParam: RuleParam
-                                   ) extends Expr2DQSteps {
+ * generate completeness dq steps
+ */
+case class CompletenessExpr2DQSteps(context: DQContext, expr: Expr, ruleParam: RuleParam)
+    extends Expr2DQSteps {
 
   private object CompletenessKeys {
     val _source = "source"
@@ -48,7 +46,7 @@
   }
   import CompletenessKeys._
 
-  def getDQSteps(): Seq[DQStep] = {
+  def getDQSteps: Seq[DQStep] = {
     val details = ruleParam.getDetails
     val completenessExpr = expr.asInstanceOf[CompletenessClause]
 
@@ -58,71 +56,74 @@
     val timestamp = context.contextId.timestamp
 
     if (!context.runTimeTableRegister.existsTable(sourceName)) {
-      warn(s"[${timestamp}] data source ${sourceName} not exists")
+      warn(s"[$timestamp] data source $sourceName not exists")
       Nil
     } else {
       val analyzer = CompletenessAnalyzer(completenessExpr, sourceName)
 
-      val selItemsClause = analyzer.selectionPairs.map { pair =>
-        val (expr, alias) = pair
-        s"${expr.desc} AS `${alias}`"
-      }.mkString(", ")
+      val selItemsClause = analyzer.selectionPairs
+        .map { pair =>
+          val (expr, alias) = pair
+          s"${expr.desc} AS `$alias`"
+        }
+        .mkString(", ")
       val aliases = analyzer.selectionPairs.map(_._2)
 
       val selClause = procType match {
         case BatchProcessType => selItemsClause
-        case StreamingProcessType => s"`${ConstantColumns.tmst}`, ${selItemsClause}"
-      }
-      val selAliases = procType match {
-        case BatchProcessType => aliases
-        case StreamingProcessType => ConstantColumns.tmst +: aliases
+        case StreamingProcessType => s"`${ConstantColumns.tmst}`, $selItemsClause"
       }
 
       // 1. source alias
       val sourceAliasTableName = "__sourceAlias"
       val sourceAliasSql = {
-        s"SELECT ${selClause} FROM `${sourceName}`"
+        s"SELECT $selClause FROM `$sourceName`"
       }
       val sourceAliasTransStep =
-        SparkSqlTransformStep(sourceAliasTableName, sourceAliasSql, emptyMap, None, true)
+        SparkSqlTransformStep(sourceAliasTableName, sourceAliasSql, emptyMap, None, cache = true)
 
       // 2. incomplete record
       val incompleteRecordsTableName = "__incompleteRecords"
       val errorConfs: Seq[RuleErrorConfParam] = ruleParam.getErrorConfs
       var incompleteWhereClause: String = ""
-      if (errorConfs.size == 0) {
+      if (errorConfs.isEmpty) {
         // without errorConfs
-        val completeWhereClause = aliases.map(a => s"`${a}` IS NOT NULL").mkString(" AND ")
-        incompleteWhereClause = s"NOT (${completeWhereClause})"
+        val completeWhereClause = aliases.map(a => s"`$a` IS NOT NULL").mkString(" AND ")
+        incompleteWhereClause = s"NOT ($completeWhereClause)"
       } else {
         // with errorConfs
         incompleteWhereClause = this.getErrorConfCompleteWhereClause(errorConfs)
       }
 
       val incompleteRecordsSql =
-        s"SELECT * FROM `${sourceAliasTableName}` WHERE ${incompleteWhereClause}"
+        s"SELECT * FROM `$sourceAliasTableName` WHERE $incompleteWhereClause"
 
       val incompleteRecordWriteStep = {
         val rwName =
-          ruleParam.getOutputOpt(RecordOutputType).flatMap(_.getNameOpt)
+          ruleParam
+            .getOutputOpt(RecordOutputType)
+            .flatMap(_.getNameOpt)
             .getOrElse(incompleteRecordsTableName)
         RecordWriteStep(rwName, incompleteRecordsTableName)
       }
       val incompleteRecordTransStep =
-        SparkSqlTransformStep(incompleteRecordsTableName, incompleteRecordsSql, emptyMap,
-          Some(incompleteRecordWriteStep), true)
+        SparkSqlTransformStep(
+          incompleteRecordsTableName,
+          incompleteRecordsSql,
+          emptyMap,
+          Some(incompleteRecordWriteStep),
+          cache = true)
       incompleteRecordTransStep.parentSteps += sourceAliasTransStep
 
-
       // 3. incomplete count
       val incompleteCountTableName = "__incompleteCount"
       val incompleteColName = details.getStringOrKey(_incomplete)
       val incompleteCountSql = procType match {
         case BatchProcessType =>
-          s"SELECT COUNT(*) AS `${incompleteColName}` FROM `${incompleteRecordsTableName}`"
+          s"SELECT COUNT(*) AS `$incompleteColName` FROM `$incompleteRecordsTableName`"
         case StreamingProcessType =>
-          s"SELECT `${ConstantColumns.tmst}`, COUNT(*) AS `${incompleteColName}` " +
-            s"FROM `${incompleteRecordsTableName}` GROUP BY `${ConstantColumns.tmst}`"
+          s"SELECT `${ConstantColumns.tmst}`, COUNT(*) AS `$incompleteColName` " +
+            s"FROM `$incompleteRecordsTableName` GROUP BY `${ConstantColumns.tmst}`"
       }
       val incompleteCountTransStep =
         SparkSqlTransformStep(incompleteCountTableName, incompleteCountSql, emptyMap)
@@ -133,12 +134,13 @@
       val totalColName = details.getStringOrKey(_total)
       val totalCountSql = procType match {
         case BatchProcessType =>
-          s"SELECT COUNT(*) AS `${totalColName}` FROM `${sourceAliasTableName}`"
+          s"SELECT COUNT(*) AS `$totalColName` FROM `$sourceAliasTableName`"
         case StreamingProcessType =>
-          s"SELECT `${ConstantColumns.tmst}`, COUNT(*) AS `${totalColName}` " +
-            s"FROM `${sourceAliasTableName}` GROUP BY `${ConstantColumns.tmst}`"
+          s"SELECT `${ConstantColumns.tmst}`, COUNT(*) AS `$totalColName` " +
+            s"FROM `$sourceAliasTableName` GROUP BY `${ConstantColumns.tmst}`"
       }
-      val totalCountTransStep = SparkSqlTransformStep(totalCountTableName, totalCountSql, emptyMap)
+      val totalCountTransStep =
+        SparkSqlTransformStep(totalCountTableName, totalCountSql, emptyMap)
       totalCountTransStep.parentSteps += sourceAliasTransStep
 
       // 5. complete metric
@@ -148,19 +150,19 @@
       val completeMetricSql = procType match {
         case BatchProcessType =>
           s"""
-             |SELECT `${totalCountTableName}`.`${totalColName}` AS `${totalColName}`,
-             |coalesce(`${incompleteCountTableName}`.`${incompleteColName}`, 0) AS `${incompleteColName}`,
-             |(`${totalCountTableName}`.`${totalColName}` - coalesce(`${incompleteCountTableName}`.`${incompleteColName}`, 0)) AS `${completeColName}`
-             |FROM `${totalCountTableName}` LEFT JOIN `${incompleteCountTableName}`
+             |SELECT `$totalCountTableName`.`$totalColName` AS `$totalColName`,
+             |coalesce(`$incompleteCountTableName`.`$incompleteColName`, 0) AS `$incompleteColName`,
+             |(`$totalCountTableName`.`$totalColName` - coalesce(`$incompleteCountTableName`.`$incompleteColName`, 0)) AS `$completeColName`
+             |FROM `$totalCountTableName` LEFT JOIN `$incompleteCountTableName`
          """.stripMargin
         case StreamingProcessType =>
           s"""
-             |SELECT `${totalCountTableName}`.`${ConstantColumns.tmst}` AS `${ConstantColumns.tmst}`,
-             |`${totalCountTableName}`.`${totalColName}` AS `${totalColName}`,
-             |coalesce(`${incompleteCountTableName}`.`${incompleteColName}`, 0) AS `${incompleteColName}`,
-             |(`${totalCountTableName}`.`${totalColName}` - coalesce(`${incompleteCountTableName}`.`${incompleteColName}`, 0)) AS `${completeColName}`
-             |FROM `${totalCountTableName}` LEFT JOIN `${incompleteCountTableName}`
-             |ON `${totalCountTableName}`.`${ConstantColumns.tmst}` = `${incompleteCountTableName}`.`${ConstantColumns.tmst}`
+             |SELECT `$totalCountTableName`.`${ConstantColumns.tmst}` AS `${ConstantColumns.tmst}`,
+             |`$totalCountTableName`.`$totalColName` AS `$totalColName`,
+             |coalesce(`$incompleteCountTableName`.`$incompleteColName`, 0) AS `$incompleteColName`,
+             |(`$totalCountTableName`.`$totalColName` - coalesce(`$incompleteCountTableName`.`$incompleteColName`, 0)) AS `$completeColName`
+             |FROM `$totalCountTableName` LEFT JOIN `$incompleteCountTableName`
+             |ON `$totalCountTableName`.`${ConstantColumns.tmst}` = `$incompleteCountTableName`.`${ConstantColumns.tmst}`
          """.stripMargin
       }
       // scalastyle:on
@@ -171,7 +173,11 @@
         MetricWriteStep(mwName, completeTableName, flattenType)
       }
       val completeTransStep =
-        SparkSqlTransformStep(completeTableName, completeMetricSql, emptyMap, Some(completeWriteStep))
+        SparkSqlTransformStep(
+          completeTableName,
+          completeMetricSql,
+          emptyMap,
+          Some(completeWriteStep))
       completeTransStep.parentSteps += incompleteCountTransStep
       completeTransStep.parentSteps += totalCountTransStep
 
@@ -181,46 +187,48 @@
   }
 
   /**
-    * get 'error' where clause
-    * @param errorConfs error configuration list
-    * @return 'error' where clause
-    */
+   * get 'error' where clause
+   * @param errorConfs error configuration list
+   * @return 'error' where clause
+   */
   def getErrorConfCompleteWhereClause(errorConfs: Seq[RuleErrorConfParam]): String = {
     errorConfs.map(errorConf => this.getEachErrorWhereClause(errorConf)).mkString(" OR ")
   }
 
   /**
-    * get error sql for each column
-    * @param errorConf  error configuration
-    * @return 'error' sql for each column
-    */
+   * get error sql for each column
+   * @param errorConf  error configuration
+   * @return 'error' sql for each column
+   */
   def getEachErrorWhereClause(errorConf: RuleErrorConfParam): String = {
     val errorType: Option[String] = errorConf.getErrorType
     val columnName: String = errorConf.getColumnName.get
     if ("regex".equalsIgnoreCase(errorType.get)) {
       // only have one regular expression
-      val regexValue: String = errorConf.getValues.apply(0)
+      val regexValue: String = errorConf.getValues.head
       val afterReplace: String = regexValue.replaceAll("""\\""", """\\\\""")
-      return s"(`${columnName}` REGEXP '${afterReplace}')"
+      return s"(`$columnName` REGEXP '$afterReplace')"
     } else if ("enumeration".equalsIgnoreCase(errorType.get)) {
       val values: Seq[String] = errorConf.getValues
       var inResult = ""
       var nullResult = ""
       if (values.contains("hive_none")) {
         // hive_none means NULL
-        nullResult = s"`${columnName}` IS NULL"
+        nullResult = s"`$columnName` IS NULL"
       }
 
-      val valueWithQuote: String = values.filter(value => !"hive_none".equals(value))
-        .map(value => s"'${value}'").mkString(", ")
+      val valueWithQuote: String = values
+        .filter(value => !"hive_none".equals(value))
+        .map(value => s"'$value'")
+        .mkString(", ")
 
       if (!StringUtils.isEmpty(valueWithQuote)) {
-        inResult = s"`${columnName}` IN (${valueWithQuote})"
+        inResult = s"`$columnName` IN ($valueWithQuote)"
       }
 
       var result = ""
       if (!StringUtils.isEmpty(inResult) && !StringUtils.isEmpty(nullResult)) {
-        result = s"(${inResult} OR ${nullResult})"
+        result = s"($inResult OR $nullResult)"
       } else if (!StringUtils.isEmpty(inResult)) {
         result = s"($inResult)"
       } else {
@@ -229,6 +237,7 @@
 
       return result
     }
-    throw new IllegalArgumentException("type in error.confs only supports regex and enumeration way")
+    throw new IllegalArgumentException(
+      "type in error.confs only supports regex and enumeration way")
   }
 }
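
Most of the hunks above apply the same interpolation cleanup: the braces are dropped when a plain identifier is interpolated (`${columnName}` becomes `$columnName`) and kept only where they are still required, for example member access such as `${ConstantColumns.tmst}`. A minimal sketch of that rule follows; the names `table` and `Cols.ts` are illustrative only and are not Griffin identifiers.

```scala
object InterpolationStyle {
  object Cols { val ts = "__ts" }

  def main(args: Array[String]): Unit = {
    val table = "my_table"

    // Braces are redundant when interpolating a bare identifier ...
    val before = s"SELECT COUNT(*) FROM `${table}`"
    val after = s"SELECT COUNT(*) FROM `$table`"

    // ... but remain necessary for anything beyond a simple identifier.
    val grouped = s"SELECT `${Cols.ts}`, COUNT(*) FROM `$table` GROUP BY `${Cols.ts}`"

    assert(before == after) // both forms interpolate to the same string
    println(grouped)
  }
}
```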
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/DistinctnessExpr2DQSteps.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/DistinctnessExpr2DQSteps.scala
index 6be5e17..98bef42 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/DistinctnessExpr2DQSteps.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/DistinctnessExpr2DQSteps.scala
@@ -18,7 +18,10 @@
 package org.apache.griffin.measure.step.builder.dsl.transform
 
 import org.apache.griffin.measure.configuration.dqdefinition.RuleParam
-import org.apache.griffin.measure.configuration.enums.FlattenType.{ArrayFlattenType, EntriesFlattenType}
+import org.apache.griffin.measure.configuration.enums.FlattenType.{
+  ArrayFlattenType,
+  EntriesFlattenType
+}
 import org.apache.griffin.measure.configuration.enums.OutputType._
 import org.apache.griffin.measure.configuration.enums.ProcessType._
 import org.apache.griffin.measure.context.DQContext
@@ -27,16 +30,18 @@
 import org.apache.griffin.measure.step.builder.dsl.expr.{DistinctnessClause, _}
 import org.apache.griffin.measure.step.builder.dsl.transform.analyzer.DistinctnessAnalyzer
 import org.apache.griffin.measure.step.transform.SparkSqlTransformStep
-import org.apache.griffin.measure.step.write.{DataSourceUpdateWriteStep, MetricWriteStep, RecordWriteStep}
+import org.apache.griffin.measure.step.write.{
+  DataSourceUpdateWriteStep,
+  MetricWriteStep,
+  RecordWriteStep
+}
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * generate distinctness dq steps
-  */
-case class DistinctnessExpr2DQSteps(context: DQContext,
-                                    expr: Expr,
-                                    ruleParam: RuleParam
-                                   ) extends Expr2DQSteps {
+ * generate distinctness dq steps
+ */
+case class DistinctnessExpr2DQSteps(context: DQContext, expr: Expr, ruleParam: RuleParam)
+    extends Expr2DQSteps {
 
   private object DistinctnessKeys {
     val _source = "source"
@@ -54,7 +59,7 @@
   }
   import DistinctnessKeys._
 
-  def getDQSteps(): Seq[DQStep] = {
+  def getDQSteps: Seq[DQStep] = {
     val details = ruleParam.getDetails
     val distinctnessExpr = expr.asInstanceOf[DistinctnessClause]
 
@@ -68,48 +73,50 @@
 
     val beginTmst = dsTimeRanges.get(sourceName).map(_.begin) match {
       case Some(t) => t
-      case _ => throw new Exception(s"empty begin tmst from ${sourceName}")
+      case _ => throw new Exception(s"empty begin tmst from $sourceName")
     }
     val endTmst = dsTimeRanges.get(sourceName).map(_.end) match {
       case Some(t) => t
-      case _ => throw new Exception(s"empty end tmst from ${sourceName}")
+      case _ => throw new Exception(s"empty end tmst from $sourceName")
     }
 
     val writeTimestampOpt = Some(endTmst)
 
     if (!context.runTimeTableRegister.existsTable(sourceName)) {
-      warn(s"[${timestamp}] data source ${sourceName} not exists")
+      warn(s"[$timestamp] data source $sourceName not exists")
       Nil
     } else {
       val withOlderTable = {
-        details.getBoolean(_withAccumulate, true) &&
-          context.runTimeTableRegister.existsTable(targetName)
+        details.getBoolean(_withAccumulate, defValue = true) &&
+        context.runTimeTableRegister.existsTable(targetName)
       }
 
-      val selClause = analyzer.selectionPairs.map { pair =>
-        val (expr, alias, _) = pair
-        s"${expr.desc} AS `${alias}`"
-      }.mkString(", ")
+      val selClause = analyzer.selectionPairs
+        .map { pair =>
+          val (expr, alias, _) = pair
+          s"${expr.desc} AS `$alias`"
+        }
+        .mkString(", ")
       val distAliases = analyzer.selectionPairs.filter(_._3).map(_._2)
-      val distAliasesClause = distAliases.map( a => s"`${a}`" ).mkString(", ")
+      val distAliasesClause = distAliases.map(a => s"`$a`").mkString(", ")
       val allAliases = analyzer.selectionPairs.map(_._2)
-      val allAliasesClause = allAliases.map( a => s"`${a}`" ).mkString(", ")
+      val allAliasesClause = allAliases.map(a => s"`$a`").mkString(", ")
       val groupAliases = analyzer.selectionPairs.filter(!_._3).map(_._2)
-      val groupAliasesClause = groupAliases.map( a => s"`${a}`" ).mkString(", ")
+      val groupAliasesClause = groupAliases.map(a => s"`$a`").mkString(", ")
 
       // 1. source alias
       val sourceAliasTableName = "__sourceAlias"
       val sourceAliasSql = {
-        s"SELECT ${selClause} FROM `${sourceName}`"
+        s"SELECT $selClause FROM `$sourceName`"
       }
       val sourceAliasTransStep =
-        SparkSqlTransformStep(sourceAliasTableName, sourceAliasSql, emptyMap, None, true)
+        SparkSqlTransformStep(sourceAliasTableName, sourceAliasSql, emptyMap, None, cache = true)
 
       // 2. total metric
       val totalTableName = "__totalMetric"
       val totalColName = details.getStringOrKey(_total)
       val totalSql = {
-        s"SELECT COUNT(*) AS `${totalColName}` FROM `${sourceAliasTableName}`"
+        s"SELECT COUNT(*) AS `$totalColName` FROM `$sourceAliasTableName`"
       }
       val totalMetricWriteStep = {
         MetricWriteStep(totalColName, totalTableName, EntriesFlattenType, writeTimestampOpt)
@@ -124,45 +131,52 @@
       val accuDupColName = details.getStringOrKey(_accu_dup)
       val selfGroupSql = {
         s"""
-           |SELECT ${distAliasesClause}, (COUNT(*) - 1) AS `${dupColName}`,
+           |SELECT $distAliasesClause, (COUNT(*) - 1) AS `$dupColName`,
            |TRUE AS `${ConstantColumns.distinct}`
-           |FROM `${sourceAliasTableName}` GROUP BY ${distAliasesClause}
+           |FROM `$sourceAliasTableName` GROUP BY $distAliasesClause
           """.stripMargin
       }
       val selfGroupTransStep =
-        SparkSqlTransformStep(selfGroupTableName, selfGroupSql, emptyMap, None, true)
+        SparkSqlTransformStep(selfGroupTableName, selfGroupSql, emptyMap, None, cache = true)
       selfGroupTransStep.parentSteps += sourceAliasTransStep
 
       val transSteps1 = totalTransStep :: selfGroupTransStep :: Nil
 
       val (transSteps2, dupCountTableName) = procType match {
-        case StreamingProcessType if (withOlderTable) =>
+        case StreamingProcessType if withOlderTable =>
           // 4.0 update old data
           val targetDsUpdateWriteStep = DataSourceUpdateWriteStep(targetName, targetName)
 
           // 4. older alias
           val olderAliasTableName = "__older"
           val olderAliasSql = {
-            s"SELECT ${selClause} FROM `${targetName}` WHERE `${ConstantColumns.tmst}` <= ${beginTmst}"
+            s"SELECT $selClause FROM `$targetName` WHERE `${ConstantColumns.tmst}` <= $beginTmst"
           }
-          val olderAliasTransStep = SparkSqlTransformStep(olderAliasTableName, olderAliasSql, emptyMap)
+          val olderAliasTransStep =
+            SparkSqlTransformStep(olderAliasTableName, olderAliasSql, emptyMap)
 
           // 5. join with older data
           val joinedTableName = "__joined"
-          val selfSelClause = (distAliases :+ dupColName).map { alias =>
-            s"`${selfGroupTableName}`.`${alias}`"
-          }.mkString(", ")
-          val onClause = distAliases.map { alias =>
-            s"coalesce(`${selfGroupTableName}`.`${alias}`, '') = coalesce(`${olderAliasTableName}`.`${alias}`, '')"
-          }.mkString(" AND ")
-          val olderIsNull = distAliases.map { alias =>
-            s"`${olderAliasTableName}`.`${alias}` IS NULL"
-          }.mkString(" AND ")
+          val selfSelClause = (distAliases :+ dupColName)
+            .map { alias =>
+              s"`$selfGroupTableName`.`$alias`"
+            }
+            .mkString(", ")
+          val onClause = distAliases
+            .map { alias =>
+              s"coalesce(`$selfGroupTableName`.`$alias`, '') = coalesce(`$olderAliasTableName`.`$alias`, '')"
+            }
+            .mkString(" AND ")
+          val olderIsNull = distAliases
+            .map { alias =>
+              s"`$olderAliasTableName`.`$alias` IS NULL"
+            }
+            .mkString(" AND ")
           val joinedSql = {
             s"""
-               |SELECT ${selfSelClause}, (${olderIsNull}) AS `${ConstantColumns.distinct}`
-               |FROM `${olderAliasTableName}` RIGHT JOIN `${selfGroupTableName}`
-               |ON ${onClause}
+               |SELECT $selfSelClause, ($olderIsNull) AS `${ConstantColumns.distinct}`
+               |FROM `$olderAliasTableName` RIGHT JOIN `$selfGroupTableName`
+               |ON $onClause
             """.stripMargin
           }
           val joinedTransStep = SparkSqlTransformStep(joinedTableName, joinedSql, emptyMap)
@@ -174,10 +188,10 @@
           val moreDupColName = "_more_dup"
           val groupSql = {
             s"""
-               |SELECT ${distAliasesClause}, `${dupColName}`, `${ConstantColumns.distinct}`,
-               |COUNT(*) AS `${moreDupColName}`
-               |FROM `${joinedTableName}`
-               |GROUP BY ${distAliasesClause}, `${dupColName}`, `${ConstantColumns.distinct}`
+               |SELECT $distAliasesClause, `$dupColName`, `${ConstantColumns.distinct}`,
+               |COUNT(*) AS `$moreDupColName`
+               |FROM `$joinedTableName`
+               |GROUP BY $distAliasesClause, `$dupColName`, `${ConstantColumns.distinct}`
              """.stripMargin
           }
           val groupTransStep = SparkSqlTransformStep(groupTableName, groupSql, emptyMap)
@@ -185,32 +199,38 @@
 
           // 7. final duplicate count
           val finalDupCountTableName = "__finalDupCount"
+
           /**
-            * dupColName:      the duplicate count of duplicated items only occurs in new data,
-            *                  which means the distinct one in new data is also duplicate
-            * accuDupColName:  the count of duplicated items accumulated in new data and old data,
-            *                  which means the accumulated distinct count in all data
-            * e.g.:  new data [A, A, B, B, C, D], old data [A, A, B, C]
-            *        selfGroupTable will be (A, 1, F), (B, 1, F), (C, 0, T), (D, 0, T)
-            *        joinedTable will be (A, 1, F), (A, 1, F), (B, 1, F), (C, 0, F), (D, 0, T)
-            *        groupTable will be (A, 1, F, 2), (B, 1, F, 1), (C, 0, F, 1), (D, 0, T, 1)
-            *        finalDupCountTable will be (A, F, 2, 3), (B, F, 2, 2), (C, F, 1, 1), (D, T, 0, 0)
-            *        The distinct result of new data only should be: (A, 2), (B, 2), (C, 1), (D, 0),
-            *        which means in new data [A, A, B, B, C, D], [A, A, B, B, C] are all duplicated,
+           * dupColName:      the duplicate count of duplicated items only occurs in new data,
+           *                  which means the distinct one in new data is also duplicate
+           * accuDupColName:  the count of duplicated items accumulated in new data and old data,
+           *                  which means the accumulated distinct count in all data
+           * e.g.:  new data [A, A, B, B, C, D], old data [A, A, B, C]
+           *        selfGroupTable will be (A, 1, F), (B, 1, F), (C, 0, T), (D, 0, T)
+           *        joinedTable will be (A, 1, F), (A, 1, F), (B, 1, F), (C, 0, F), (D, 0, T)
+           *        groupTable will be (A, 1, F, 2), (B, 1, F, 1), (C, 0, F, 1), (D, 0, T, 1)
+           *        finalDupCountTable will be (A, F, 2, 3), (B, F, 2, 2), (C, F, 1, 1), (D, T, 0, 0)
+           *        The distinct result of new data only should be: (A, 2), (B, 2), (C, 1), (D, 0),
+           *        which means in new data [A, A, B, B, C, D], [A, A, B, B, C] are all duplicated,
            *         only [D] is distinct
-            */
+           */
           val finalDupCountSql = {
             s"""
-               |SELECT ${distAliasesClause}, `${ConstantColumns.distinct}`,
-               |CASE WHEN `${ConstantColumns.distinct}` THEN `${dupColName}`
-               |ELSE (`${dupColName}` + 1) END AS `${dupColName}`,
-               |CASE WHEN `${ConstantColumns.distinct}` THEN `${dupColName}`
-               |ELSE (`${dupColName}` + `${moreDupColName}`) END AS `${accuDupColName}`
-               |FROM `${groupTableName}`
+               |SELECT $distAliasesClause, `${ConstantColumns.distinct}`,
+               |CASE WHEN `${ConstantColumns.distinct}` THEN `$dupColName`
+               |ELSE (`$dupColName` + 1) END AS `$dupColName`,
+               |CASE WHEN `${ConstantColumns.distinct}` THEN `$dupColName`
+               |ELSE (`$dupColName` + `$moreDupColName`) END AS `$accuDupColName`
+               |FROM `$groupTableName`
              """.stripMargin
           }
           val finalDupCountTransStep =
-            SparkSqlTransformStep(finalDupCountTableName, finalDupCountSql, emptyMap, None, true)
+            SparkSqlTransformStep(
+              finalDupCountTableName,
+              finalDupCountSql,
+              emptyMap,
+              None,
+              cache = true)
           finalDupCountTransStep.parentSteps += groupTransStep
 
           (finalDupCountTransStep :: targetDsUpdateWriteStep :: Nil, finalDupCountTableName)
@@ -223,8 +243,8 @@
       val distColName = details.getStringOrKey(_distinct)
       val distSql = {
         s"""
-           |SELECT COUNT(*) AS `${distColName}`
-           |FROM `${dupCountTableName}` WHERE `${ConstantColumns.distinct}`
+           |SELECT COUNT(*) AS `$distColName`
+           |FROM `$dupCountTableName` WHERE `${ConstantColumns.distinct}`
          """.stripMargin
       }
       val distMetricWriteStep = {
@@ -237,21 +257,23 @@
 
       val duplicationArrayName = details.getString(_duplicationArray, "")
       val transSteps4 = if (duplicationArrayName.nonEmpty) {
-        val recordEnable = details.getBoolean(_recordEnable, false)
-        if (groupAliases.size > 0) {
+        val recordEnable = details.getBoolean(_recordEnable, defValue = false)
+        if (groupAliases.nonEmpty) {
           // with some group by requirement
           // 9. origin data join with distinct information
           val informedTableName = "__informed"
-          val onClause = distAliases.map { alias =>
-            s"coalesce(`${sourceAliasTableName}`.`${alias}`, '') = coalesce(`${dupCountTableName}`.`${alias}`, '')"
-          }.mkString(" AND ")
+          val onClause = distAliases
+            .map { alias =>
+              s"coalesce(`$sourceAliasTableName`.`$alias`, '') = coalesce(`$dupCountTableName`.`$alias`, '')"
+            }
+            .mkString(" AND ")
           val informedSql = {
             s"""
-               |SELECT `${sourceAliasTableName}`.*,
-               |`${dupCountTableName}`.`${dupColName}` AS `${dupColName}`,
-               |`${dupCountTableName}`.`${ConstantColumns.distinct}` AS `${ConstantColumns.distinct}`
-               |FROM `${sourceAliasTableName}` LEFT JOIN `${dupCountTableName}`
-               |ON ${onClause}
+               |SELECT `$sourceAliasTableName`.*,
+               |`$dupCountTableName`.`$dupColName` AS `$dupColName`,
+               |`$dupCountTableName`.`${ConstantColumns.distinct}` AS `${ConstantColumns.distinct}`
+               |FROM `$sourceAliasTableName` LEFT JOIN `$dupCountTableName`
+               |ON $onClause
                """.stripMargin
           }
           val informedTransStep = SparkSqlTransformStep(informedTableName, informedSql, emptyMap)
@@ -263,8 +285,8 @@
           val rnSql = {
             s"""
                |SELECT *,
-               |ROW_NUMBER() OVER (DISTRIBUTE BY ${rnDistClause} ${rnSortClause}) `${ConstantColumns.rowNumber}`
-               |FROM `${informedTableName}`
+               |ROW_NUMBER() OVER (DISTRIBUTE BY $rnDistClause $rnSortClause) `${ConstantColumns.rowNumber}`
+               |FROM `$informedTableName`
                """.stripMargin
           }
           val rnTransStep = SparkSqlTransformStep(rnTableName, rnSql, emptyMap)
@@ -274,12 +296,15 @@
           val dupItemsTableName = "__dupItems"
           val dupItemsSql = {
             s"""
-               |SELECT ${allAliasesClause}, `${dupColName}` FROM `${rnTableName}`
+               |SELECT $allAliasesClause, `$dupColName` FROM `$rnTableName`
                |WHERE NOT `${ConstantColumns.distinct}` OR `${ConstantColumns.rowNumber}` > 1
                """.stripMargin
           }
           val dupItemsWriteStep = {
-            val rwName = ruleParam.getOutputOpt(RecordOutputType).flatMap(_.getNameOpt).getOrElse(dupItemsTableName)
+            val rwName = ruleParam
+              .getOutputOpt(RecordOutputType)
+              .flatMap(_.getNameOpt)
+              .getOrElse(dupItemsTableName)
             RecordWriteStep(rwName, dupItemsTableName, None, writeTimestampOpt)
           }
           val dupItemsTransStep = {
@@ -288,8 +313,7 @@
                 dupItemsTableName,
                 dupItemsSql,
                 emptyMap,
-                Some(dupItemsWriteStep)
-              )
+                Some(dupItemsWriteStep))
             } else {
               SparkSqlTransformStep(dupItemsTableName, dupItemsSql, emptyMap)
             }
@@ -302,12 +326,13 @@
           val groupSelClause = groupAliasesClause
           val groupDupMetricSql = {
             s"""
-               |SELECT ${groupSelClause}, `${dupColName}`, COUNT(*) AS `${numColName}`
-               |FROM `${dupItemsTableName}` GROUP BY ${groupSelClause}, `${dupColName}`
+               |SELECT $groupSelClause, `$dupColName`, COUNT(*) AS `$numColName`
+               |FROM `$dupItemsTableName` GROUP BY $groupSelClause, `$dupColName`
              """.stripMargin
           }
           val groupDupMetricWriteStep = {
-            MetricWriteStep(duplicationArrayName,
+            MetricWriteStep(
+              duplicationArrayName,
               groupDupMetricTableName,
               ArrayFlattenType,
               writeTimestampOpt)
@@ -317,8 +342,7 @@
               groupDupMetricTableName,
               groupDupMetricSql,
               emptyMap,
-              Some(groupDupMetricWriteStep)
-            )
+              Some(groupDupMetricWriteStep))
           groupDupMetricTransStep.parentSteps += dupItemsTransStep
 
           groupDupMetricTransStep :: Nil
@@ -327,20 +351,22 @@
           // 9. duplicate record
           val dupRecordTableName = "__dupRecords"
           val dupRecordSelClause = procType match {
-            case StreamingProcessType if (withOlderTable) =>
-              s"${distAliasesClause}, `${dupColName}`, `${accuDupColName}`"
+            case StreamingProcessType if withOlderTable =>
+              s"$distAliasesClause, `$dupColName`, `$accuDupColName`"
 
-            case _ => s"${distAliasesClause}, `${dupColName}`"
+            case _ => s"$distAliasesClause, `$dupColName`"
           }
           val dupRecordSql = {
             s"""
-               |SELECT ${dupRecordSelClause}
-               |FROM `${dupCountTableName}` WHERE `${dupColName}` > 0
+               |SELECT $dupRecordSelClause
+               |FROM `$dupCountTableName` WHERE `$dupColName` > 0
               """.stripMargin
           }
           val dupRecordWriteStep = {
             val rwName =
-              ruleParam.getOutputOpt(RecordOutputType).flatMap(_.getNameOpt)
+              ruleParam
+                .getOutputOpt(RecordOutputType)
+                .flatMap(_.getNameOpt)
                 .getOrElse(dupRecordTableName)
             RecordWriteStep(rwName, dupRecordTableName, None, writeTimestampOpt)
           }
@@ -351,10 +377,14 @@
                 dupRecordSql,
                 emptyMap,
                 Some(dupRecordWriteStep),
-                true
-              )
+                cache = true)
             } else {
-              SparkSqlTransformStep(dupRecordTableName, dupRecordSql, emptyMap, None, true)
+              SparkSqlTransformStep(
+                dupRecordTableName,
+                dupRecordSql,
+                emptyMap,
+                None,
+                cache = true)
             }
           }
 
@@ -363,8 +393,8 @@
           val numColName = details.getStringOrKey(_num)
           val dupMetricSql = {
             s"""
-               |SELECT `${dupColName}`, COUNT(*) AS `${numColName}`
-               |FROM `${dupRecordTableName}` GROUP BY `${dupColName}`
+               |SELECT `$dupColName`, COUNT(*) AS `$numColName`
+               |FROM `$dupRecordTableName` GROUP BY `$dupColName`
               """.stripMargin
           }
           val dupMetricWriteStep = {
@@ -372,16 +402,14 @@
               duplicationArrayName,
               dupMetricTableName,
               ArrayFlattenType,
-              writeTimestampOpt
-            )
+              writeTimestampOpt)
           }
           val dupMetricTransStep =
             SparkSqlTransformStep(
               dupMetricTableName,
               dupMetricSql,
               emptyMap,
-              Some(dupMetricWriteStep)
-            )
+              Some(dupMetricWriteStep))
           dupMetricTransStep.parentSteps += dupRecordTransStep
 
           dupMetricTransStep :: Nil
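
Two other changes recur throughout this file: boolean arguments are now passed by name (`cache = true`, `defValue = false`) so the call site documents itself, and calls that would run long are broken into one argument per line. A hedged sketch of the idea is below; `TransformStep` and its parameters are simplified stand-ins, not the actual `SparkSqlTransformStep` signature.

```scala
object NamedArgsStyle {
  // Simplified stand-in for a transform step; not the real Griffin API.
  final case class TransformStep(
      name: String,
      sql: String,
      details: Map[String, Any] = Map.empty,
      cache: Boolean = false)

  def main(args: Array[String]): Unit = {
    // TransformStep("t", "SELECT 1", Map.empty, true) compiles, but the bare
    // `true` says nothing at the call site; naming the flag keeps the intent clear.
    val step = TransformStep(
      name = "t",
      sql = "SELECT 1",
      cache = true)
    println(step)
  }
}
```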
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/Expr2DQSteps.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/Expr2DQSteps.scala
index 8955a8a..92dc817 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/Expr2DQSteps.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/Expr2DQSteps.scala
@@ -27,24 +27,21 @@
 
 trait Expr2DQSteps extends Loggable with Serializable {
 
-  protected val emtptDQSteps = Seq[DQStep]()
-  protected val emptyMap = Map[String, Any]()
+  protected val emtptDQSteps: Seq[DQStep] = Seq[DQStep]()
+  protected val emptyMap: Map[String, Any] = Map[String, Any]()
 
-  def getDQSteps(): Seq[DQStep]
+  def getDQSteps: Seq[DQStep]
 }
 
 /**
-  * get dq steps generator for griffin dsl rule
-  */
+ * get dq steps generator for griffin dsl rule
+ */
 object Expr2DQSteps {
-  private val emtptExpr2DQSteps = new Expr2DQSteps {
-    def getDQSteps(): Seq[DQStep] = emtptDQSteps
+  private val emtptExpr2DQSteps: Expr2DQSteps = new Expr2DQSteps {
+    def getDQSteps: Seq[DQStep] = emtptDQSteps
   }
 
-  def apply(context: DQContext,
-            expr: Expr,
-            ruleParam: RuleParam
-           ): Expr2DQSteps = {
+  def apply(context: DQContext, expr: Expr, ruleParam: RuleParam): Expr2DQSteps = {
     ruleParam.getDqType match {
       case Accuracy => AccuracyExpr2DQSteps(context, expr, ruleParam)
       case Profiling => ProfilingExpr2DQSteps(context, expr, ruleParam)
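
The `getDQSteps()` to `getDQSteps` change follows the usual Scala convention that a side-effect-free, arity-0 member is declared and called without parentheses. A small illustrative sketch, where the trait below is a toy and not the Griffin `Expr2DQSteps` trait:

```scala
object NoParensStyle {
  trait Steps {
    // Pure accessor: declared without parentheses, so it reads like a field.
    def steps: Seq[String]

    // Side-effecting operation: keeps its parentheses by convention.
    def run(): Unit = steps.foreach(println)
  }

  def main(args: Array[String]): Unit = {
    val s = new Steps { val steps: Seq[String] = Seq("step-1", "step-2") }
    s.run()
  }
}
```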
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/ProfilingExpr2DQSteps.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/ProfilingExpr2DQSteps.scala
index bc111fa..a127037 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/ProfilingExpr2DQSteps.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/ProfilingExpr2DQSteps.scala
@@ -33,19 +33,17 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * generate profiling dq steps
-  */
-case class ProfilingExpr2DQSteps(context: DQContext,
-                                 expr: Expr,
-                                 ruleParam: RuleParam
-                                ) extends Expr2DQSteps {
+ * generate profiling dq steps
+ */
+case class ProfilingExpr2DQSteps(context: DQContext, expr: Expr, ruleParam: RuleParam)
+    extends Expr2DQSteps {
 
   private object ProfilingKeys {
     val _source = "source"
   }
   import ProfilingKeys._
 
-  def getDQSteps(): Seq[DQStep] = {
+  def getDQSteps: Seq[DQStep] = {
     val details = ruleParam.getDetails
     val profilingExpr = expr.asInstanceOf[ProfilingClause]
 
@@ -59,18 +57,18 @@
     val timestamp = context.contextId.timestamp
 
     if (!context.runTimeTableRegister.existsTable(sourceName)) {
-      warn(s"[${timestamp}] data source ${sourceName} not exists")
+      warn(s"[$timestamp] data source $sourceName not exists")
       Nil
     } else {
       val analyzer = ProfilingAnalyzer(profilingExpr, sourceName)
       val selExprDescs = analyzer.selectionExprs.map { sel =>
         val alias = sel match {
           case s: AliasableExpr =>
-            s.alias.filter(StringUtils.isNotEmpty).map(a => s" AS `${a}`").getOrElse("")
+            s.alias.filter(StringUtils.isNotEmpty).map(a => s" AS `$a`").getOrElse("")
 
           case _ => ""
         }
-        s"${sel.desc}${alias}"
+        s"${sel.desc}$alias"
       }
       val selCondition = profilingExpr.selectClause.extraConditionOpt.map(_.desc).mkString
       val selClause = procType match {
@@ -94,8 +92,8 @@
 
       // 1. select statement
       val profilingSql = {
-        s"SELECT ${selCondition} ${selClause} " +
-          s"${fromClause} ${preGroupbyClause} ${groupbyClause} ${postGroupbyClause}"
+        s"SELECT $selCondition $selClause " +
+          s"$fromClause $preGroupbyClause $groupbyClause $postGroupbyClause"
       }
       val profilingName = ruleParam.getOutDfName()
       val profilingMetricWriteStep = {
@@ -105,7 +103,11 @@
         MetricWriteStep(mwName, profilingName, flattenType)
       }
       val profilingTransStep =
-        SparkSqlTransformStep(profilingName, profilingSql, details, Some(profilingMetricWriteStep))
+        SparkSqlTransformStep(
+          profilingName,
+          profilingSql,
+          details,
+          Some(profilingMetricWriteStep))
       profilingTransStep :: Nil
     }
   }
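
The doc-comment hunks in these files only re-indent the continuation asterisks, moving from a two-space to a one-space gutter (the JavaDoc-style layout). A trivial example of the target layout; the method is a placeholder, not Griffin code.

```scala
object DocCommentStyle {

  /**
   * Returns a where-clause fragment for the given column, quoted with backticks.
   * Continuation lines are aligned one space in from the opening slash.
   *
   * @param column column name to filter on (placeholder parameter)
   * @return a where-clause fragment
   */
  def whereClause(column: String): String = s"WHERE `$column` IS NOT NULL"

  def main(args: Array[String]): Unit = println(whereClause("user_id"))
}
```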
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/TimelinessExpr2DQSteps.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/TimelinessExpr2DQSteps.scala
index aea5dca..60fa5d5 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/TimelinessExpr2DQSteps.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/TimelinessExpr2DQSteps.scala
@@ -18,7 +18,10 @@
 package org.apache.griffin.measure.step.builder.dsl.transform
 
 import org.apache.griffin.measure.configuration.dqdefinition.RuleParam
-import org.apache.griffin.measure.configuration.enums.FlattenType.{ArrayFlattenType, DefaultFlattenType}
+import org.apache.griffin.measure.configuration.enums.FlattenType.{
+  ArrayFlattenType,
+  DefaultFlattenType
+}
 import org.apache.griffin.measure.configuration.enums.OutputType._
 import org.apache.griffin.measure.configuration.enums.ProcessType._
 import org.apache.griffin.measure.context.DQContext
@@ -32,12 +35,10 @@
 import org.apache.griffin.measure.utils.TimeUtil
 
 /**
-  * generate timeliness dq steps
-  */
-case class TimelinessExpr2DQSteps(context: DQContext,
-                                  expr: Expr,
-                                  ruleParam: RuleParam
-                                 ) extends Expr2DQSteps {
+ * generate timeliness dq steps
+ */
+case class TimelinessExpr2DQSteps(context: DQContext, expr: Expr, ruleParam: RuleParam)
+    extends Expr2DQSteps {
 
   private object TimelinessKeys {
     val _source = "source"
@@ -53,7 +54,7 @@
   }
   import TimelinessKeys._
 
-  def getDQSteps(): Seq[DQStep] = {
+  def getDQSteps: Seq[DQStep] = {
     val details = ruleParam.getDetails
     val timelinessExpr = expr.asInstanceOf[TimelinessClause]
 
@@ -61,16 +62,9 @@
 
     val procType = context.procType
     val timestamp = context.contextId.timestamp
-    val dsTimeRanges = context.dataSourceTimeRanges
-
-    val minTmstOpt = dsTimeRanges.get(sourceName).flatMap(_.minTmstOpt)
-    val minTmst = minTmstOpt match {
-      case Some(t) => t
-      case _ => throw new Exception(s"empty min tmst from ${sourceName}")
-    }
 
     if (!context.runTimeTableRegister.existsTable(sourceName)) {
-      warn(s"[${timestamp}] data source ${sourceName} not exists")
+      warn(s"[$timestamp] data source $sourceName not exists")
       Nil
     } else {
       val analyzer = TimelinessAnalyzer(timelinessExpr, sourceName)
@@ -82,14 +76,14 @@
       val inTimeSql = etsSelOpt match {
         case Some(etsSel) =>
           s"""
-             |SELECT *, (${btsSel}) AS `${ConstantColumns.beginTs}`,
-             |(${etsSel}) AS `${ConstantColumns.endTs}`
-             |FROM ${sourceName} WHERE (${btsSel}) IS NOT NULL AND (${etsSel}) IS NOT NULL
+             |SELECT *, ($btsSel) AS `${ConstantColumns.beginTs}`,
+             |($etsSel) AS `${ConstantColumns.endTs}`
+             |FROM $sourceName WHERE ($btsSel) IS NOT NULL AND ($etsSel) IS NOT NULL
            """.stripMargin
         case _ =>
           s"""
-             |SELECT *, (${btsSel}) AS `${ConstantColumns.beginTs}`
-             |FROM ${sourceName} WHERE (${btsSel}) IS NOT NULL
+             |SELECT *, ($btsSel) AS `${ConstantColumns.beginTs}`
+             |FROM $sourceName WHERE ($btsSel) IS NOT NULL
            """.stripMargin
       }
       val inTimeTransStep = SparkSqlTransformStep(inTimeTableName, inTimeSql, emptyMap)
@@ -102,10 +96,11 @@
         case _ => ConstantColumns.tmst
       }
       val latencySql = {
-        s"SELECT *, (`${etsColName}` - `${ConstantColumns.beginTs}`) AS `${latencyColName}` " +
-          s"FROM `${inTimeTableName}`"
+        s"SELECT *, (`$etsColName` - `${ConstantColumns.beginTs}`) AS `$latencyColName` " +
+          s"FROM `$inTimeTableName`"
       }
-      val latencyTransStep = SparkSqlTransformStep(latencyTableName, latencySql, emptyMap, None, true)
+      val latencyTransStep =
+        SparkSqlTransformStep(latencyTableName, latencySql, emptyMap, None, cache = true)
       latencyTransStep.parentSteps += inTimeTransStep
 
       // 3. timeliness metric
@@ -116,17 +111,17 @@
 
         case BatchProcessType =>
           s"""
-             |SELECT COUNT(*) AS `${totalColName}`,
-             |CAST(AVG(`${latencyColName}`) AS BIGINT) AS `${avgColName}`
-             |FROM `${latencyTableName}`
+             |SELECT COUNT(*) AS `$totalColName`,
+             |CAST(AVG(`$latencyColName`) AS BIGINT) AS `$avgColName`
+             |FROM `$latencyTableName`
            """.stripMargin
 
         case StreamingProcessType =>
           s"""
              |SELECT `${ConstantColumns.tmst}`,
-             |COUNT(*) AS `${totalColName}`,
-             |CAST(AVG(`${latencyColName}`) AS BIGINT) AS `${avgColName}`
-             |FROM `${latencyTableName}`
+             |COUNT(*) AS `$totalColName`,
+             |CAST(AVG(`$latencyColName`) AS BIGINT) AS `$avgColName`
+             |FROM `$latencyTableName`
              |GROUP BY `${ConstantColumns.tmst}`
            """.stripMargin
       }
@@ -148,11 +143,13 @@
         case Some(tsh) =>
           val recordTableName = "__lateRecords"
           val recordSql = {
-            s"SELECT * FROM `${latencyTableName}` WHERE `${latencyColName}` > ${tsh}"
+            s"SELECT * FROM `$latencyTableName` WHERE `$latencyColName` > $tsh"
           }
           val recordWriteStep = {
             val rwName =
-              ruleParam.getOutputOpt(RecordOutputType).flatMap(_.getNameOpt)
+              ruleParam
+                .getOutputOpt(RecordOutputType)
+                .flatMap(_.getNameOpt)
                 .getOrElse(recordTableName)
 
             RecordWriteStep(rwName, recordTableName, None)
@@ -173,8 +170,8 @@
           val stepColName = details.getStringOrKey(_step)
           val rangeSql = {
             s"""
-               |SELECT *, CAST((`${latencyColName}` / ${stepSize}) AS BIGINT) AS `${stepColName}`
-               |FROM `${latencyTableName}`
+               |SELECT *, CAST((`$latencyColName` / $stepSize) AS BIGINT) AS `$stepColName`
+               |FROM `$latencyTableName`
              """.stripMargin
           }
           val rangeTransStep = SparkSqlTransformStep(rangeTableName, rangeSql, emptyMap)
@@ -186,20 +183,24 @@
           val rangeMetricSql = procType match {
             case BatchProcessType =>
               s"""
-                 |SELECT `${stepColName}`, COUNT(*) AS `${countColName}`
-                 |FROM `${rangeTableName}` GROUP BY `${stepColName}`
+                 |SELECT `$stepColName`, COUNT(*) AS `$countColName`
+                 |FROM `$rangeTableName` GROUP BY `$stepColName`
                 """.stripMargin
             case StreamingProcessType =>
               s"""
-                 |SELECT `${ConstantColumns.tmst}`, `${stepColName}`, COUNT(*) AS `${countColName}`
-                 |FROM `${rangeTableName}` GROUP BY `${ConstantColumns.tmst}`, `${stepColName}`
+                 |SELECT `${ConstantColumns.tmst}`, `$stepColName`, COUNT(*) AS `$countColName`
+                 |FROM `$rangeTableName` GROUP BY `${ConstantColumns.tmst}`, `$stepColName`
                 """.stripMargin
           }
           val rangeMetricWriteStep = {
             MetricWriteStep(stepColName, rangeMetricTableName, ArrayFlattenType)
           }
           val rangeMetricTransStep =
-            SparkSqlTransformStep(rangeMetricTableName, rangeMetricSql, emptyMap, Some(rangeMetricWriteStep))
+            SparkSqlTransformStep(
+              rangeMetricTableName,
+              rangeMetricSql,
+              emptyMap,
+              Some(rangeMetricWriteStep))
           rangeMetricTransStep.parentSteps += rangeTransStep
 
           rangeMetricTransStep :: Nil
@@ -208,25 +209,31 @@
 
       // 6. percentiles
       val percentiles = getPercentiles(details)
-      val transSteps4 = if (percentiles.size > 0) {
+      val transSteps4 = if (percentiles.nonEmpty) {
         val percentileTableName = "__percentile"
         val percentileColName = details.getStringOrKey(_percentileColPrefix)
-        val percentileCols = percentiles.map { pct =>
-          val pctName = (pct * 100).toInt.toString
-          s"floor(percentile_approx(${latencyColName}, ${pct})) " +
-            s"AS `${percentileColName}_${pctName}`"
-        }.mkString(", ")
+        val percentileCols = percentiles
+          .map { pct =>
+            val pctName = (pct * 100).toInt.toString
+            s"floor(percentile_approx($latencyColName, $pct)) " +
+              s"AS `${percentileColName}_$pctName`"
+          }
+          .mkString(", ")
         val percentileSql = {
           s"""
-             |SELECT ${percentileCols}
-             |FROM `${latencyTableName}`
+             |SELECT $percentileCols
+             |FROM `$latencyTableName`
             """.stripMargin
         }
         val percentileWriteStep = {
           MetricWriteStep(percentileTableName, percentileTableName, DefaultFlattenType)
         }
         val percentileTransStep =
-          SparkSqlTransformStep(percentileTableName, percentileSql, emptyMap, Some(percentileWriteStep))
+          SparkSqlTransformStep(
+            percentileTableName,
+            percentileSql,
+            emptyMap,
+            Some(percentileWriteStep))
         percentileTransStep.parentSteps += latencyTransStep
 
         percentileTransStep :: Nil
@@ -238,7 +245,7 @@
   }
 
   private def getPercentiles(details: Map[String, Any]): Seq[Double] = {
-    details.getDoubleArr(_percentileValues).filter(d => (d >= 0 && d <= 1))
+    details.getDoubleArr(_percentileValues).filter(d => d >= 0 && d <= 1)
   }
 
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/UniquenessExpr2DQSteps.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/UniquenessExpr2DQSteps.scala
index 534641b..77c3013 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/UniquenessExpr2DQSteps.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/UniquenessExpr2DQSteps.scala
@@ -18,7 +18,10 @@
 package org.apache.griffin.measure.step.builder.dsl.transform
 
 import org.apache.griffin.measure.configuration.dqdefinition.RuleParam
-import org.apache.griffin.measure.configuration.enums.FlattenType.{ArrayFlattenType, EntriesFlattenType}
+import org.apache.griffin.measure.configuration.enums.FlattenType.{
+  ArrayFlattenType,
+  EntriesFlattenType
+}
 import org.apache.griffin.measure.configuration.enums.OutputType._
 import org.apache.griffin.measure.configuration.enums.ProcessType._
 import org.apache.griffin.measure.context.DQContext
@@ -31,12 +34,10 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * generate uniqueness dq steps
-  */
-case class UniquenessExpr2DQSteps(context: DQContext,
-                                  expr: Expr,
-                                  ruleParam: RuleParam
-                                 ) extends Expr2DQSteps {
+ * generate uniqueness dq steps
+ */
+case class UniquenessExpr2DQSteps(context: DQContext, expr: Expr, ruleParam: RuleParam)
+    extends Expr2DQSteps {
 
   private object UniquenessKeys {
     val _source = "source"
@@ -50,7 +51,7 @@
   }
   import UniquenessKeys._
 
-  def getDQSteps(): Seq[DQStep] = {
+  def getDQSteps: Seq[DQStep] = {
     val details = ruleParam.getDetails
     val uniquenessExpr = expr.asInstanceOf[UniquenessClause]
 
@@ -62,21 +63,23 @@
     val timestamp = context.contextId.timestamp
 
     if (!context.runTimeTableRegister.existsTable(sourceName)) {
-      warn(s"[${timestamp}] data source ${sourceName} not exists")
+      warn(s"[$timestamp] data source $sourceName not exists")
       Nil
     } else if (!context.runTimeTableRegister.existsTable(targetName)) {
-      warn(s"[${timestamp}] data source ${targetName} not exists")
+      warn(s"[$timestamp] data source $targetName not exists")
       Nil
     } else {
-      val selItemsClause = analyzer.selectionPairs.map { pair =>
-        val (expr, alias) = pair
-        s"${expr.desc} AS `${alias}`"
-      }.mkString(", ")
+      val selItemsClause = analyzer.selectionPairs
+        .map { pair =>
+          val (expr, alias) = pair
+          s"${expr.desc} AS `$alias`"
+        }
+        .mkString(", ")
       val aliases = analyzer.selectionPairs.map(_._2)
 
       val selClause = procType match {
         case BatchProcessType => selItemsClause
-        case StreamingProcessType => s"`${ConstantColumns.tmst}`, ${selItemsClause}"
+        case StreamingProcessType => s"`${ConstantColumns.tmst}`, $selItemsClause"
       }
       val selAliases = procType match {
         case BatchProcessType => aliases
@@ -85,24 +88,28 @@
 
       // 1. source distinct mapping
       val sourceTableName = "__source"
-      val sourceSql = s"SELECT DISTINCT ${selClause} FROM ${sourceName}"
+      val sourceSql = s"SELECT DISTINCT $selClause FROM $sourceName"
       val sourceTransStep = SparkSqlTransformStep(sourceTableName, sourceSql, emptyMap)
 
       // 2. target mapping
       val targetTableName = "__target"
-      val targetSql = s"SELECT ${selClause} FROM ${targetName}"
+      val targetSql = s"SELECT $selClause FROM $targetName"
       val targetTransStep = SparkSqlTransformStep(targetTableName, targetSql, emptyMap)
 
       // 3. joined
       val joinedTableName = "__joined"
-      val joinedSelClause = selAliases.map { alias =>
-        s"`${sourceTableName}`.`${alias}` AS `${alias}`"
-      }.mkString(", ")
-      val onClause = aliases.map { alias =>
-        s"coalesce(`${sourceTableName}`.`${alias}`, '') = coalesce(`${targetTableName}`.`${alias}`, '')"
-      }.mkString(" AND ")
+      val joinedSelClause = selAliases
+        .map { alias =>
+          s"`$sourceTableName`.`$alias` AS `$alias`"
+        }
+        .mkString(", ")
+      val onClause = aliases
+        .map { alias =>
+          s"coalesce(`$sourceTableName`.`$alias`, '') = coalesce(`$targetTableName`.`$alias`, '')"
+        }
+        .mkString(" AND ")
       val joinedSql = {
-        s"SELECT ${joinedSelClause} FROM `${targetTableName}` RIGHT JOIN `${sourceTableName}` ON ${onClause}"
+        s"SELECT $joinedSelClause FROM `$targetTableName` RIGHT JOIN `$sourceTableName` ON $onClause"
       }
       val joinedTransStep = SparkSqlTransformStep(joinedTableName, joinedSql, emptyMap)
       joinedTransStep.parentSteps += sourceTransStep
@@ -110,26 +117,29 @@
 
       // 4. group
       val groupTableName = "__group"
-      val groupSelClause = selAliases.map { alias =>
-        s"`${alias}`"
-      }.mkString(", ")
+      val groupSelClause = selAliases
+        .map { alias =>
+          s"`$alias`"
+        }
+        .mkString(", ")
       val dupColName = details.getStringOrKey(_dup)
       val groupSql = {
-        s"SELECT ${groupSelClause}, (COUNT(*) - 1) AS `${dupColName}` " +
-          s"FROM `${joinedTableName}` GROUP BY ${groupSelClause}"
+        s"SELECT $groupSelClause, (COUNT(*) - 1) AS `$dupColName` " +
+          s"FROM `$joinedTableName` GROUP BY $groupSelClause"
       }
-      val groupTransStep = SparkSqlTransformStep(groupTableName, groupSql, emptyMap, None, true)
+      val groupTransStep =
+        SparkSqlTransformStep(groupTableName, groupSql, emptyMap, None, cache = true)
       groupTransStep.parentSteps += joinedTransStep
 
       // 5. total metric
       val totalTableName = "__totalMetric"
       val totalColName = details.getStringOrKey(_total)
       val totalSql = procType match {
-        case BatchProcessType => s"SELECT COUNT(*) AS `${totalColName}` FROM `${sourceName}`"
+        case BatchProcessType => s"SELECT COUNT(*) AS `$totalColName` FROM `$sourceName`"
         case StreamingProcessType =>
           s"""
-             |SELECT `${ConstantColumns.tmst}`, COUNT(*) AS `${totalColName}`
-             |FROM `${sourceName}` GROUP BY `${ConstantColumns.tmst}`
+             |SELECT `${ConstantColumns.tmst}`, COUNT(*) AS `$totalColName`
+             |FROM `$sourceName` GROUP BY `${ConstantColumns.tmst}`
            """.stripMargin
       }
       val totalMetricWriteStep = MetricWriteStep(totalColName, totalTableName, EntriesFlattenType)
@@ -139,7 +149,7 @@
       // 6. unique record
       val uniqueRecordTableName = "__uniqueRecord"
       val uniqueRecordSql = {
-        s"SELECT * FROM `${groupTableName}` WHERE `${dupColName}` = 0"
+        s"SELECT * FROM `$groupTableName` WHERE `$dupColName` = 0"
       }
       val uniqueRecordTransStep =
         SparkSqlTransformStep(uniqueRecordTableName, uniqueRecordSql, emptyMap)
@@ -150,11 +160,11 @@
       val uniqueColName = details.getStringOrKey(_unique)
       val uniqueSql = procType match {
         case BatchProcessType =>
-          s"SELECT COUNT(*) AS `${uniqueColName}` FROM `${uniqueRecordTableName}`"
+          s"SELECT COUNT(*) AS `$uniqueColName` FROM `$uniqueRecordTableName`"
         case StreamingProcessType =>
           s"""
-             |SELECT `${ConstantColumns.tmst}`, COUNT(*) AS `${uniqueColName}`
-             |FROM `${uniqueRecordTableName}` GROUP BY `${ConstantColumns.tmst}`
+             |SELECT `${ConstantColumns.tmst}`, COUNT(*) AS `$uniqueColName`
+             |FROM `$uniqueRecordTableName` GROUP BY `${ConstantColumns.tmst}`
            """.stripMargin
       }
       val uniqueMetricWriteStep =
@@ -170,47 +180,54 @@
         // 8. duplicate record
         val dupRecordTableName = "__dupRecords"
         val dupRecordSql = {
-          s"SELECT * FROM `${groupTableName}` WHERE `${dupColName}` > 0"
+          s"SELECT * FROM `$groupTableName` WHERE `$dupColName` > 0"
         }
 
         val dupRecordWriteStep = {
           val rwName =
-            ruleParam.getOutputOpt(RecordOutputType).flatMap(_.getNameOpt)
+            ruleParam
+              .getOutputOpt(RecordOutputType)
+              .flatMap(_.getNameOpt)
               .getOrElse(dupRecordTableName)
 
           RecordWriteStep(rwName, dupRecordTableName)
         }
         val dupRecordTransStep =
-          SparkSqlTransformStep(dupRecordTableName, dupRecordSql, emptyMap, Some(dupRecordWriteStep), true)
+          SparkSqlTransformStep(
+            dupRecordTableName,
+            dupRecordSql,
+            emptyMap,
+            Some(dupRecordWriteStep),
+            cache = true)
 
         // 9. duplicate metric
         val dupMetricTableName = "__dupMetric"
         val numColName = details.getStringOrKey(_num)
         val dupMetricSelClause = procType match {
-          case BatchProcessType => s"`${dupColName}`, COUNT(*) AS `${numColName}`"
+          case BatchProcessType => s"`$dupColName`, COUNT(*) AS `$numColName`"
 
           case StreamingProcessType =>
-            s"`${ConstantColumns.tmst}`, `${dupColName}`, COUNT(*) AS `${numColName}`"
+            s"`${ConstantColumns.tmst}`, `$dupColName`, COUNT(*) AS `$numColName`"
         }
         val dupMetricGroupbyClause = procType match {
-          case BatchProcessType => s"`${dupColName}`"
-          case StreamingProcessType => s"`${ConstantColumns.tmst}`, `${dupColName}`"
+          case BatchProcessType => s"`$dupColName`"
+          case StreamingProcessType => s"`${ConstantColumns.tmst}`, `$dupColName`"
         }
         val dupMetricSql = {
           s"""
-             |SELECT ${dupMetricSelClause} FROM `${dupRecordTableName}`
-             |GROUP BY ${dupMetricGroupbyClause}
+             |SELECT $dupMetricSelClause FROM `$dupRecordTableName`
+             |GROUP BY $dupMetricGroupbyClause
           """.stripMargin
         }
         val dupMetricWriteStep = {
           MetricWriteStep(duplicationArrayName, dupMetricTableName, ArrayFlattenType)
         }
         val dupMetricTransStep =
-          SparkSqlTransformStep(dupMetricTableName,
+          SparkSqlTransformStep(
+            dupMetricTableName,
             dupMetricSql,
             emptyMap,
-            Some(dupMetricWriteStep)
-          )
+            Some(dupMetricWriteStep))
         dupMetricTransStep.parentSteps += dupRecordTransStep
 
         dupMetricTransStep :: Nil
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/AccuracyAnalyzer.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/AccuracyAnalyzer.scala
index 484dd5d..bf77090 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/AccuracyAnalyzer.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/AccuracyAnalyzer.scala
@@ -19,23 +19,22 @@
 
 import org.apache.griffin.measure.step.builder.dsl.expr._
 
-
 case class AccuracyAnalyzer(expr: LogicalExpr, sourceName: String, targetName: String)
-  extends BasicAnalyzer {
+    extends BasicAnalyzer {
 
-  val dataSourceNames =
+  val dataSourceNames: Set[String] =
     expr.preOrderTraverseDepthFirst(Set[String]())(seqDataSourceNames, combDataSourceNames)
 
-  val sourceSelectionExprs = {
+  val sourceSelectionExprs: Seq[SelectionExpr] = {
     val seq = seqSelectionExprs(sourceName)
     expr.preOrderTraverseDepthFirst(Seq[SelectionExpr]())(seq, combSelectionExprs)
   }
-  val targetSelectionExprs = {
+  val targetSelectionExprs: Seq[SelectionExpr] = {
     val seq = seqSelectionExprs(targetName)
     expr.preOrderTraverseDepthFirst(Seq[SelectionExpr]())(seq, combSelectionExprs)
   }
 
-  val selectionExprs = sourceSelectionExprs ++ {
+  val selectionExprs: Seq[AliasableExpr] = sourceSelectionExprs ++ {
     expr.preOrderTraverseDepthFirst(Seq[AliasableExpr]())(seqWithAliasExprs, combWithAliasExprs)
   }
 
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/BasicAnalyzer.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/BasicAnalyzer.scala
index 73d5c8f..743bba9 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/BasicAnalyzer.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/BasicAnalyzer.scala
@@ -20,35 +20,42 @@
 import org.apache.griffin.measure.step.builder.dsl.expr._
 
 /**
-  * analyzer of expr, to help generate dq steps by expr
-  */
+ * analyzer of expr, to help generate dq steps by expr
+ */
 trait BasicAnalyzer extends Serializable {
 
   val expr: Expr
 
-  val seqDataSourceNames = (expr: Expr, v: Set[String]) => {
+  val seqDataSourceNames: (Expr, Set[String]) => Set[String] = (expr: Expr, v: Set[String]) => {
     expr match {
       case DataSourceHeadExpr(name) => v + name
       case _ => v
     }
   }
-  val combDataSourceNames = (a: Set[String], b: Set[String]) => a ++ b
+  val combDataSourceNames: (Set[String], Set[String]) => Set[String] =
+    (a: Set[String], b: Set[String]) => a ++ b
 
-  val seqSelectionExprs = (dsName: String) => (expr: Expr, v: Seq[SelectionExpr]) => {
-    expr match {
-      case se @ SelectionExpr(head: DataSourceHeadExpr, _, _) if (head.name == dsName) => v :+ se
-      case _ => v
+  val seqSelectionExprs: String => (Expr, Seq[SelectionExpr]) => Seq[SelectionExpr] =
+    (dsName: String) =>
+      (expr: Expr, v: Seq[SelectionExpr]) => {
+        expr match {
+          case se @ SelectionExpr(head: DataSourceHeadExpr, _, _) if head.name == dsName =>
+            v :+ se
+          case _ => v
+        }
     }
-  }
-  val combSelectionExprs = (a: Seq[SelectionExpr], b: Seq[SelectionExpr]) => a ++ b
+  val combSelectionExprs: (Seq[SelectionExpr], Seq[SelectionExpr]) => Seq[SelectionExpr] =
+    (a: Seq[SelectionExpr], b: Seq[SelectionExpr]) => a ++ b
 
-  val seqWithAliasExprs = (expr: Expr, v: Seq[AliasableExpr]) => {
-    expr match {
-      case se: SelectExpr => v
-      case a: AliasableExpr if (a.alias.nonEmpty) => v :+ a
-      case _ => v
+  val seqWithAliasExprs: (Expr, Seq[AliasableExpr]) => Seq[AliasableExpr] =
+    (expr: Expr, v: Seq[AliasableExpr]) => {
+      expr match {
+        case _: SelectExpr => v
+        case a: AliasableExpr if a.alias.nonEmpty => v :+ a
+        case _ => v
+      }
     }
-  }
-  val combWithAliasExprs = (a: Seq[AliasableExpr], b: Seq[AliasableExpr]) => a ++ b
+  val combWithAliasExprs: (Seq[AliasableExpr], Seq[AliasableExpr]) => Seq[AliasableExpr] =
+    (a: Seq[AliasableExpr], b: Seq[AliasableExpr]) => a ++ b
 
 }
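
The analyzer changes above add explicit result types to public `val`s, including the function-valued ones, so the inferred type no longer leaks into the API by accident. A minimal sketch of the same idea with toy names:

```scala
object ExplicitTypesStyle {
  // Without an annotation the public type would be whatever the compiler infers;
  // spelling it out keeps the member's API stable and readable.
  val combineNames: (Set[String], Set[String]) => Set[String] =
    (a: Set[String], b: Set[String]) => a ++ b

  val defaultAliases: Seq[String] = Seq("alias_0", "alias_1")

  def main(args: Array[String]): Unit =
    println(combineNames(Set("src"), Set("tgt")) -> defaultAliases)
}
```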
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/CompletenessAnalyzer.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/CompletenessAnalyzer.scala
index c01d976..db3a1a2 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/CompletenessAnalyzer.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/CompletenessAnalyzer.scala
@@ -19,21 +19,21 @@
 
 import org.apache.griffin.measure.step.builder.dsl.expr._
 
-
 case class CompletenessAnalyzer(expr: CompletenessClause, sourceName: String)
-  extends BasicAnalyzer {
+    extends BasicAnalyzer {
 
-  val seqAlias = (expr: Expr, v: Seq[String]) => {
+  val seqAlias: (Expr, Seq[String]) => Seq[String] = (expr: Expr, v: Seq[String]) => {
     expr match {
       case apr: AliasableExpr => v ++ apr.alias
       case _ => v
     }
   }
-  val combAlias = (a: Seq[String], b: Seq[String]) => a ++ b
+  val combAlias: (Seq[String], Seq[String]) => Seq[String] = (a: Seq[String], b: Seq[String]) =>
+    a ++ b
 
   private val exprs = expr.exprs
-  private def genAlias(idx: Int): String = s"alias_${idx}"
-  val selectionPairs = exprs.zipWithIndex.map { pair =>
+  private def genAlias(idx: Int): String = s"alias_$idx"
+  val selectionPairs: Seq[(Expr, String)] = exprs.zipWithIndex.map { pair =>
     val (pr, idx) = pair
     val res = pr.preOrderTraverseDepthFirst(Seq[String]())(seqAlias, combAlias)
     (pr, res.headOption.getOrElse(genAlias(idx)))
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/DistinctnessAnalyzer.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/DistinctnessAnalyzer.scala
index 80ced8a..e4940c2 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/DistinctnessAnalyzer.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/DistinctnessAnalyzer.scala
@@ -19,21 +19,21 @@
 
 import org.apache.griffin.measure.step.builder.dsl.expr._
 
-
 case class DistinctnessAnalyzer(expr: DistinctnessClause, sourceName: String)
-  extends BasicAnalyzer {
+    extends BasicAnalyzer {
 
-  val seqAlias = (expr: Expr, v: Seq[String]) => {
+  val seqAlias: (Expr, Seq[String]) => Seq[String] = (expr: Expr, v: Seq[String]) => {
     expr match {
       case apr: AliasableExpr => v ++ apr.alias
       case _ => v
     }
   }
-  val combAlias = (a: Seq[String], b: Seq[String]) => a ++ b
+  val combAlias: (Seq[String], Seq[String]) => Seq[String] = (a: Seq[String], b: Seq[String]) =>
+    a ++ b
 
   private val exprs = expr.exprs
-  private def genAlias(idx: Int): String = s"alias_${idx}"
-  val selectionPairs = exprs.zipWithIndex.map { pair =>
+  private def genAlias(idx: Int): String = s"alias_$idx"
+  val selectionPairs: Seq[(Expr, String, Boolean)] = exprs.zipWithIndex.map { pair =>
     val (pr, idx) = pair
     val res = pr.preOrderTraverseDepthFirst(Seq[String]())(seqAlias, combAlias)
     (pr, res.headOption.getOrElse(genAlias(idx)), pr.tag.isEmpty)
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/ProfilingAnalyzer.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/ProfilingAnalyzer.scala
index ba3e00c..63bae2e 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/ProfilingAnalyzer.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/ProfilingAnalyzer.scala
@@ -19,10 +19,9 @@
 
 import org.apache.griffin.measure.step.builder.dsl.expr._
 
-
 case class ProfilingAnalyzer(expr: ProfilingClause, sourceName: String) extends BasicAnalyzer {
 
-  val dataSourceNames =
+  val dataSourceNames: Set[String] =
     expr.preOrderTraverseDepthFirst(Set[String]())(seqDataSourceNames, combDataSourceNames)
 
   val selectionExprs: Seq[Expr] = {
@@ -35,8 +34,8 @@
     }
   }
 
-  val groupbyExprOpt = expr.groupbyClauseOpt
-  val preGroupbyExprs = expr.preGroupbyClauses.map(_.extractSelf)
-  val postGroupbyExprs = expr.postGroupbyClauses.map(_.extractSelf)
+  val groupbyExprOpt: Option[GroupbyClause] = expr.groupbyClauseOpt
+  val preGroupbyExprs: Seq[Expr] = expr.preGroupbyClauses.map(_.extractSelf)
+  val postGroupbyExprs: Seq[Expr] = expr.postGroupbyClauses.map(_.extractSelf)
 
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/TimelinessAnalyzer.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/TimelinessAnalyzer.scala
index 29afaac..d0b65c7 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/TimelinessAnalyzer.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/TimelinessAnalyzer.scala
@@ -19,7 +19,6 @@
 
 import org.apache.griffin.measure.step.builder.dsl.expr._
 
-
 case class TimelinessAnalyzer(expr: TimelinessClause, sourceName: String) extends BasicAnalyzer {
 
 //  val tsExpr = expr.desc
@@ -52,7 +51,6 @@
 //    (pr.desc, alias)
 //  }
 
-
   private val exprs = expr.exprs.map(_.desc).toList
 
   val (btsExpr, etsExprOpt) = exprs match {
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/UniquenessAnalyzer.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/UniquenessAnalyzer.scala
index 98e1912..4b36fb2 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/UniquenessAnalyzer.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/dsl/transform/analyzer/UniquenessAnalyzer.scala
@@ -19,21 +19,21 @@
 
 import org.apache.griffin.measure.step.builder.dsl.expr.{AliasableExpr, _}
 
-
 case class UniquenessAnalyzer(expr: UniquenessClause, sourceName: String, targetName: String)
-  extends BasicAnalyzer {
+    extends BasicAnalyzer {
 
-  val seqAlias = (expr: Expr, v: Seq[String]) => {
+  val seqAlias: (Expr, Seq[String]) => Seq[String] = (expr: Expr, v: Seq[String]) => {
     expr match {
       case apr: AliasableExpr => v ++ apr.alias
       case _ => v
     }
   }
-  val combAlias = (a: Seq[String], b: Seq[String]) => a ++ b
+  val combAlias: (Seq[String], Seq[String]) => Seq[String] = (a: Seq[String], b: Seq[String]) =>
+    a ++ b
 
   private val exprs = expr.exprs
-  private def genAlias(idx: Int): String = s"alias_${idx}"
-  val selectionPairs = exprs.zipWithIndex.map { pair =>
+  private def genAlias(idx: Int): String = s"alias_$idx"
+  val selectionPairs: Seq[(Expr, String)] = exprs.zipWithIndex.map { pair =>
     val (pr, idx) = pair
     val res = pr.preOrderTraverseDepthFirst(Seq[String]())(seqAlias, combAlias)
     (pr, res.headOption.getOrElse(genAlias(idx)))
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/preproc/PreProcParamMaker.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/preproc/PreProcParamMaker.scala
index ca93710..546e41a 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/preproc/PreProcParamMaker.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/preproc/PreProcParamMaker.scala
@@ -21,22 +21,25 @@
 import org.apache.griffin.measure.configuration.enums.DslType._
 
 /**
-  * generate each entity pre-proc params by template defined in pre-proc param
-  */
+ * generate each entity pre-proc params by template defined in pre-proc param
+ */
 object PreProcParamMaker {
 
-  case class StringAnyMap ( values: Map[String, Any] )
+  case class StringAnyMap(values: Map[String, Any])
 
-  def makePreProcRules(rules: Seq[RuleParam],
-                       suffix: String, dfName: String): (Seq[RuleParam], String) = {
+  def makePreProcRules(
+      rules: Seq[RuleParam],
+      suffix: String,
+      dfName: String): (Seq[RuleParam], String) = {
     val len = rules.size
-    val (newRules, _) = rules.zipWithIndex.foldLeft((Nil: Seq[RuleParam], dfName)) { (ret, pair) =>
-      val (rls, prevOutDfName) = ret
-      val (rule, i) = pair
-      val inName = rule.getInDfName(prevOutDfName)
-      val outName = if (i == len - 1) dfName else rule.getOutDfName(genNameWithIndex(dfName, i))
-      val ruleWithNames = rule.replaceInOutDfName(inName, outName)
-      (rls :+ makeNewPreProcRule(ruleWithNames, suffix), outName)
+    val (newRules, _) = rules.zipWithIndex.foldLeft((Nil: Seq[RuleParam], dfName)) {
+      (ret, pair) =>
+        val (rls, prevOutDfName) = ret
+        val (rule, i) = pair
+        val inName = rule.getInDfName(prevOutDfName)
+        val outName = if (i == len - 1) dfName else rule.getOutDfName(genNameWithIndex(dfName, i))
+        val ruleWithNames = rule.replaceInOutDfName(inName, outName)
+        (rls :+ makeNewPreProcRule(ruleWithNames, suffix), outName)
     }
     (newRules, withSuffix(dfName, suffix))
   }
@@ -53,14 +56,14 @@
     }
   }
 
-  private def genNameWithIndex(name: String, i: Int): String = s"${name}${i}"
+  private def genNameWithIndex(name: String, i: Int): String = s"$name$i"
 
   private def replaceDfNameSuffix(str: String, dfName: String, suffix: String): String = {
-    val regexStr = s"(?i)${dfName}"
+    val regexStr = s"(?i)$dfName"
     val replaceDfName = withSuffix(dfName, suffix)
     str.replaceAll(regexStr, replaceDfName)
   }
 
-  def withSuffix(str: String, suffix: String): String = s"${str}_${suffix}"
+  def withSuffix(str: String, suffix: String): String = s"${str}_$suffix"
 
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/builder/udf/GriffinUDFs.scala b/measure/src/main/scala/org/apache/griffin/measure/step/builder/udf/GriffinUDFs.scala
index b3e813d..53e9fd6 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/builder/udf/GriffinUDFs.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/builder/udf/GriffinUDFs.scala
@@ -27,8 +27,8 @@
 }
 
 /**
-  * user defined functions extension
-  */
+ * user defined functions extension
+ */
 object GriffinUDFs {
 
   def register(sparkSession: SparkSession): Unit = {
@@ -52,11 +52,10 @@
 }
 
 /**
-  * aggregation functions extension
-  */
+ * aggregation functions extension
+ */
 object GriffinUDAggFs {
 
-  def register(sparkSession: SparkSession): Unit = {
-  }
+  def register(sparkSession: SparkSession): Unit = {}
 
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/read/ReadStep.scala b/measure/src/main/scala/org/apache/griffin/measure/step/read/ReadStep.scala
index 39a3576..11582d8 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/read/ReadStep.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/read/ReadStep.scala
@@ -29,14 +29,14 @@
   val cache: Boolean
 
   def execute(context: DQContext): Boolean = {
-    info(s"read data source [${name}]")
+    info(s"read data source [$name]")
     read(context) match {
       case Some(df) =>
 //        if (needCache) context.dataFrameCache.cacheDataFrame(name, df)
         context.runTimeTableRegister.registerTable(name, df)
         true
       case _ =>
-        warn(s"read data source [${name}] fails")
+        warn(s"read data source [$name] fails")
         false
     }
   }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/read/UnionReadStep.scala b/measure/src/main/scala/org/apache/griffin/measure/step/read/UnionReadStep.scala
index 660eaee..84c320c 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/read/UnionReadStep.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/read/UnionReadStep.scala
@@ -22,9 +22,7 @@
 import org.apache.griffin.measure.context.DQContext
 import org.apache.griffin.measure.utils.DataFrameUtil._
 
-case class UnionReadStep(name: String,
-                         readSteps: Seq[ReadStep]
-                        ) extends ReadStep {
+case class UnionReadStep(name: String, readSteps: Seq[ReadStep]) extends ReadStep {
 
   val config: Map[String, Any] = Map()
   val cache: Boolean = false
@@ -33,7 +31,7 @@
     val dfOpts = readSteps.map { readStep =>
       readStep.read(context)
     }
-    if (dfOpts.size > 0) {
+    if (dfOpts.nonEmpty) {
       dfOpts.reduce((a, b) => unionDfOpts(a, b))
     } else None
   }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/transform/DataFrameOps.scala b/measure/src/main/scala/org/apache/griffin/measure/step/transform/DataFrameOps.scala
index 1b01387..82bc874 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/transform/DataFrameOps.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/transform/DataFrameOps.scala
@@ -29,8 +29,8 @@
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * pre-defined data frame operations
-  */
+ * pre-defined data frame operations
+ */
 object DataFrameOps {
 
   final val _fromJson = "from_json"
@@ -45,15 +45,16 @@
     val _matchedFraction = "matchedFraction"
   }
 
-  def fromJson(sparkSession: SparkSession,
-               inputDfName: String,
-               details: Map[String, Any]): DataFrame = {
+  def fromJson(
+      sparkSession: SparkSession,
+      inputDfName: String,
+      details: Map[String, Any]): DataFrame = {
     val _colName = "col.name"
     val colNameOpt = details.get(_colName).map(_.toString)
 
-    implicit val encoder = Encoders.STRING
+    implicit val encoder: Encoder[String] = Encoders.STRING
 
-    val df: DataFrame = sparkSession.table(s"`${inputDfName}`")
+    val df: DataFrame = sparkSession.table(s"`$inputDfName`")
     val rdd = colNameOpt match {
       case Some(colName: String) => df.map(r => r.getAs[String](colName))
       case _ => df.map(_.getAs[String](0))
@@ -61,10 +62,11 @@
     sparkSession.read.json(rdd) // slow process
   }
 
-  def accuracy(sparkSession: SparkSession,
-               inputDfName: String,
-               contextId: ContextId,
-               details: Map[String, Any]): DataFrame = {
+  def accuracy(
+      sparkSession: SparkSession,
+      inputDfName: String,
+      contextId: ContextId,
+      details: Map[String, Any]): DataFrame = {
     import AccuracyOprKeys._
 
     val miss = details.getStringOrKey(_miss)
@@ -78,11 +80,11 @@
       try {
         Some(r.getAs[Long](k))
       } catch {
-        case e: Throwable => None
+        case _: Throwable => None
       }
     }
 
-    val df = sparkSession.table(s"`${inputDfName}`")
+    val df = sparkSession.table(s"`$inputDfName`")
 
     val results = df.rdd.flatMap { row =>
       try {
@@ -92,29 +94,36 @@
         val ar = AccuracyMetric(missCount, totalCount)
         if (ar.isLegal) Some((tmst, ar)) else None
       } catch {
-        case e: Throwable => None
+        case _: Throwable => None
       }
     }.collect
 
     // cache and update results
-    val updatedResults = CacheResults.update(results.map{ pair =>
+    val updatedResults = CacheResults.update(results.map { pair =>
       val (t, r) = pair
       CacheResult(t, updateTime, r)
     })
 
     // generate metrics
-    val schema = StructType(Array(
-      StructField(ConstantColumns.tmst, LongType),
-      StructField(miss, LongType),
-      StructField(total, LongType),
-      StructField(matched, LongType),
-      StructField(matchedFraction, DoubleType),
-      StructField(ConstantColumns.record, BooleanType),
-      StructField(ConstantColumns.empty, BooleanType)
-    ))
+    val schema = StructType(
+      Array(
+        StructField(ConstantColumns.tmst, LongType),
+        StructField(miss, LongType),
+        StructField(total, LongType),
+        StructField(matched, LongType),
+        StructField(matchedFraction, DoubleType),
+        StructField(ConstantColumns.record, BooleanType),
+        StructField(ConstantColumns.empty, BooleanType)))
     val rows = updatedResults.map { r =>
       val ar = r.result.asInstanceOf[AccuracyMetric]
-      Row(r.timeStamp, ar.miss, ar.total, ar.getMatch, ar.matchFraction, !ar.initial, ar.eventual)
+      Row(
+        r.timeStamp,
+        ar.miss,
+        ar.total,
+        ar.getMatch,
+        ar.matchFraction,
+        !ar.initial,
+        ar.eventual())
     }.toArray
     val rowRdd = sparkSession.sparkContext.parallelize(rows)
     val retDf = sparkSession.createDataFrame(rowRdd, schema)
@@ -122,8 +131,11 @@
     retDf
   }
 
-  def clear(sparkSession: SparkSession, inputDfName: String, details: Map[String, Any]): DataFrame = {
-    val df = sparkSession.table(s"`${inputDfName}`")
+  def clear(
+      sparkSession: SparkSession,
+      inputDfName: String,
+      details: Map[String, Any]): DataFrame = {
+    val df = sparkSession.table(s"`$inputDfName`")
     val emptyRdd = sparkSession.sparkContext.emptyRDD[Row]
     sparkSession.createDataFrame(emptyRdd, df.schema)
   }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/transform/DataFrameOpsTransformStep.scala b/measure/src/main/scala/org/apache/griffin/measure/step/transform/DataFrameOpsTransformStep.scala
index 1892531..1b3fb33 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/transform/DataFrameOpsTransformStep.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/transform/DataFrameOpsTransformStep.scala
@@ -21,15 +21,16 @@
 import org.apache.griffin.measure.step.write.WriteStep
 
 /**
-  * data frame ops transform step
-  */
-case class DataFrameOpsTransformStep[T <: WriteStep](name: String,
-                                     inputDfName: String,
-                                     rule: String,
-                                     details: Map[String, Any],
-                                     writeStepOpt: Option[T] = None,
-                                     cache: Boolean = false
-                                    ) extends TransformStep {
+ * data frame ops transform step
+ */
+case class DataFrameOpsTransformStep[T <: WriteStep](
+    name: String,
+    inputDfName: String,
+    rule: String,
+    details: Map[String, Any],
+    writeStepOpt: Option[T] = None,
+    cache: Boolean = false)
+    extends TransformStep {
 
   def doExecute(context: DQContext): Boolean = {
     val sparkSession = context.sparkSession
@@ -40,7 +41,7 @@
           DataFrameOps.accuracy(sparkSession, inputDfName, context.contextId, details)
 
         case DataFrameOps._clear => DataFrameOps.clear(sparkSession, inputDfName, details)
-        case _ => throw new Exception(s"df opr [ ${rule} ] not supported")
+        case _ => throw new Exception(s"df opr [ $rule ] not supported")
       }
       if (cache) context.dataFrameCache.cacheDataFrame(name, df)
       context.runTimeTableRegister.registerTable(name, df)
@@ -50,7 +51,7 @@
       }
     } catch {
       case e: Throwable =>
-        error(s"run data frame ops [ ${rule} ] error: ${e.getMessage}", e)
+        error(s"run data frame ops [ $rule ] error: ${e.getMessage}", e)
         false
     }
   }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/transform/SparkSqlTransformStep.scala b/measure/src/main/scala/org/apache/griffin/measure/step/transform/SparkSqlTransformStep.scala
index 50780c6..fc0306f 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/transform/SparkSqlTransformStep.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/transform/SparkSqlTransformStep.scala
@@ -21,14 +21,15 @@
 import org.apache.griffin.measure.step.write.WriteStep
 
 /**
-  * spark sql transform step
-  */
-case class SparkSqlTransformStep[T <: WriteStep](name: String,
-                                                 rule: String,
-                                                 details: Map[String, Any],
-                                                 writeStepOpt: Option[T] = None,
-                                                 cache: Boolean = false
-                                                ) extends TransformStep {
+ * spark sql transform step
+ */
+case class SparkSqlTransformStep[T <: WriteStep](
+    name: String,
+    rule: String,
+    details: Map[String, Any],
+    writeStepOpt: Option[T] = None,
+    cache: Boolean = false)
+    extends TransformStep {
   def doExecute(context: DQContext): Boolean = {
     val sparkSession = context.sparkSession
     try {
@@ -41,7 +42,7 @@
       }
     } catch {
       case e: Throwable =>
-        error(s"run spark sql [ ${rule} ] error: ${e.getMessage}", e)
+        error(s"run spark sql [ $rule ] error: ${e.getMessage}", e)
         false
     }
   }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/transform/TransformStep.scala b/measure/src/main/scala/org/apache/griffin/measure/step/transform/TransformStep.scala
index ae1e5a5..e2dfdd1 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/transform/TransformStep.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/transform/TransformStep.scala
@@ -17,13 +17,14 @@
 
 package org.apache.griffin.measure.step.transform
 
+import scala.collection.mutable
 import scala.collection.mutable.HashSet
 import scala.concurrent.ExecutionContext
 import scala.concurrent.Future
 import scala.concurrent.duration.Duration
 
 import org.apache.griffin.measure.context.DQContext
-import org.apache.griffin.measure.step.DQStep
+import org.apache.griffin.measure.step.{DQStep, DQStepStatus}
 import org.apache.griffin.measure.step.DQStepStatus._
 import org.apache.griffin.measure.utils.ThreadUtils
 
@@ -35,9 +36,9 @@
 
   val cache: Boolean
 
-  var status = PENDING
+  var status: DQStepStatus.Value = PENDING
 
-  val parentSteps = new HashSet[TransformStep]
+  val parentSteps = new mutable.HashSet[TransformStep]
 
   def doExecute(context: DQContext): Boolean
 
@@ -61,12 +62,12 @@
       Future.sequence(parentStepFutures)(implicitly, TransformStep.transformStepContext),
       Duration.Inf)
 
-    parentSteps.map(step => {
+    parentSteps.foreach(step => {
       while (step.status == RUNNING) {
         Thread.sleep(1000L)
       }
     })
-    val prepared = parentSteps.foldLeft(true)((ret, step) => ret && step.status == COMPLETE)
+    val prepared = parentSteps.forall(step => step.status == COMPLETE)
     if (prepared) {
       val res = doExecute(context)
       info(threadName + " end transform step : \n" + debugString())
@@ -91,7 +92,7 @@
   def debugString(level: Int = 0): String = {
     val stringBuffer = new StringBuilder
     if (level > 0) {
-      for (i <- 0 to level - 1) {
+      for (_ <- 0 until level) {
         stringBuffer.append("|   ")
       }
       stringBuffer.append("|---")
@@ -103,7 +104,6 @@
 }
 
 object TransformStep {
-  private[transform] val transformStepContext = ExecutionContext.fromExecutorService(
-    ThreadUtils.newDaemonCachedThreadPool("transform-step"))
+  private[transform] val transformStepContext =
+    ExecutionContext.fromExecutorService(ThreadUtils.newDaemonCachedThreadPool("transform-step"))
 }
-
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/write/DataSourceUpdateWriteStep.scala b/measure/src/main/scala/org/apache/griffin/measure/step/write/DataSourceUpdateWriteStep.scala
index c0c01bf..c1af659 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/write/DataSourceUpdateWriteStep.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/write/DataSourceUpdateWriteStep.scala
@@ -23,11 +23,9 @@
 import org.apache.griffin.measure.context.DQContext
 
 /**
-  * update data source streaming cache
-  */
-case class DataSourceUpdateWriteStep(dsName: String,
-                                     inputName: String
-                                    ) extends WriteStep {
+ * update data source streaming cache
+ */
+case class DataSourceUpdateWriteStep(dsName: String, inputName: String) extends WriteStep {
 
   val name: String = ""
   val writeTimestampOpt: Option[Long] = None
@@ -39,23 +37,23 @@
           .find(ds => StringUtils.equals(ds.name, dsName))
           .foreach(_.updateData(df))
       case _ =>
-        warn(s"update ${dsName} from ${inputName} fails")
+        warn(s"update $dsName from $inputName fails")
     }
     true
   }
 
   private def getDataFrame(context: DQContext, name: String): Option[DataFrame] = {
     try {
-      val df = context.sparkSession.table(s"`${name}`")
+      val df = context.sparkSession.table(s"`$name`")
       Some(df)
     } catch {
       case e: Throwable =>
-        error(s"get data frame ${name} fails", e)
+        error(s"get data frame $name fails", e)
         None
     }
   }
 
-  private def getDataSourceCacheUpdateDf(context: DQContext): Option[DataFrame]
-    = getDataFrame(context, inputName)
+  private def getDataSourceCacheUpdateDf(context: DQContext): Option[DataFrame] =
+    getDataFrame(context, inputName)
 
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/write/MetricFlushStep.scala b/measure/src/main/scala/org/apache/griffin/measure/step/write/MetricFlushStep.scala
index b31bdd3..40754e2 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/write/MetricFlushStep.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/write/MetricFlushStep.scala
@@ -20,8 +20,8 @@
 import org.apache.griffin.measure.context.DQContext
 
 /**
-  * flush final metric map in context and write
-  */
+ * flush final metric map in context and write
+ */
 case class MetricFlushStep() extends WriteStep {
 
   val name: String = ""
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/write/MetricWriteStep.scala b/measure/src/main/scala/org/apache/griffin/measure/step/write/MetricWriteStep.scala
index 6b99765..d43a265 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/write/MetricWriteStep.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/write/MetricWriteStep.scala
@@ -18,23 +18,29 @@
 package org.apache.griffin.measure.step.write
 
 import org.apache.griffin.measure.configuration.enums.{SimpleMode, TimestampMode}
-import org.apache.griffin.measure.configuration.enums.FlattenType.{ArrayFlattenType, EntriesFlattenType, FlattenType, MapFlattenType}
+import org.apache.griffin.measure.configuration.enums.FlattenType.{
+  ArrayFlattenType,
+  EntriesFlattenType,
+  FlattenType,
+  MapFlattenType
+}
 import org.apache.griffin.measure.context.DQContext
 import org.apache.griffin.measure.step.builder.ConstantColumns
 import org.apache.griffin.measure.utils.JsonUtil
 import org.apache.griffin.measure.utils.ParamUtil._
 
 /**
-  * write metrics into context metric wrapper
-  */
-case class MetricWriteStep(name: String,
-                           inputName: String,
-                           flattenType: FlattenType,
-                           writeTimestampOpt: Option[Long] = None
-                          ) extends WriteStep {
+ * write metrics into context metric wrapper
+ */
+case class MetricWriteStep(
+    name: String,
+    inputName: String,
+    flattenType: FlattenType,
+    writeTimestampOpt: Option[Long] = None)
+    extends WriteStep {
 
-  val emptyMetricMap = Map[Long, Map[String, Any]]()
-  val emptyMap = Map[String, Any]()
+  val emptyMetricMap: Map[Long, Map[String, Any]] = Map[Long, Map[String, Any]]()
+  val emptyMap: Map[String, Any] = Map[String, Any]()
 
   def execute(context: DQContext): Boolean = {
     val timestamp = writeTimestampOpt.getOrElse(context.contextId.timestamp)
@@ -75,35 +81,37 @@
 
   private def getMetricMaps(context: DQContext): Seq[Map[String, Any]] = {
     try {
-      val pdf = context.sparkSession.table(s"`${inputName}`")
+      val pdf = context.sparkSession.table(s"`$inputName`")
       val records = pdf.toJSON.collect()
-      if (records.size > 0) {
+      if (records.length > 0) {
         records.flatMap { rec =>
           try {
             val value = JsonUtil.toAnyMap(rec)
             Some(value)
           } catch {
-            case e: Throwable => None
+            case _: Throwable => None
           }
         }.toSeq
       } else Nil
     } catch {
       case e: Throwable =>
-        error(s"get metric ${name} fails", e)
+        error(s"get metric $name fails", e)
         Nil
     }
   }
 
-  private def flattenMetric(metrics: Seq[Map[String, Any]], name: String, flattenType: FlattenType
-                             ): Map[String, Any] = {
+  private def flattenMetric(
+      metrics: Seq[Map[String, Any]],
+      name: String,
+      flattenType: FlattenType): Map[String, Any] = {
     flattenType match {
       case EntriesFlattenType => metrics.headOption.getOrElse(emptyMap)
-      case ArrayFlattenType => Map[String, Any]((name -> metrics))
+      case ArrayFlattenType => Map[String, Any](name -> metrics)
       case MapFlattenType =>
         val v = metrics.headOption.getOrElse(emptyMap)
-        Map[String, Any]((name -> v))
+        Map[String, Any](name -> v)
       case _ =>
-        if (metrics.size > 1) Map[String, Any]((name -> metrics))
+        if (metrics.size > 1) Map[String, Any](name -> metrics)
         else metrics.headOption.getOrElse(emptyMap)
     }
   }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/write/RecordWriteStep.scala b/measure/src/main/scala/org/apache/griffin/measure/step/write/RecordWriteStep.scala
index 80d6d31..01db3fe 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/write/RecordWriteStep.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/write/RecordWriteStep.scala
@@ -26,13 +26,14 @@
 import org.apache.griffin.measure.utils.JsonUtil
 
 /**
-  * write records needs to be sink
-  */
-case class RecordWriteStep(name: String,
-                           inputName: String,
-                           filterTableNameOpt: Option[String] = None,
-                           writeTimestampOpt: Option[Long] = None
-                          ) extends WriteStep {
+ * write records needs to be sink
+ */
+case class RecordWriteStep(
+    name: String,
+    inputName: String,
+    filterTableNameOpt: Option[String] = None,
+    writeTimestampOpt: Option[Long] = None)
+    extends WriteStep {
 
   def execute(context: DQContext): Boolean = {
     val timestamp = writeTimestampOpt.getOrElse(context.contextId.timestamp)
@@ -75,36 +76,36 @@
 
   private def getDataFrame(context: DQContext, name: String): Option[DataFrame] = {
     try {
-      val df = context.sparkSession.table(s"`${name}`")
+      val df = context.sparkSession.table(s"`$name`")
       Some(df)
     } catch {
       case e: Throwable =>
-        error(s"get data frame ${name} fails", e)
+        error(s"get data frame $name fails", e)
         None
     }
   }
 
-  private def getRecordDataFrame(context: DQContext): Option[DataFrame]
-    = getDataFrame(context, inputName)
+  private def getRecordDataFrame(context: DQContext): Option[DataFrame] =
+    getDataFrame(context, inputName)
 
-  private def getFilterTableDataFrame(context: DQContext): Option[DataFrame]
-    = filterTableNameOpt.flatMap(getDataFrame(context, _))
+  private def getFilterTableDataFrame(context: DQContext): Option[DataFrame] =
+    filterTableNameOpt.flatMap(getDataFrame(context, _))
 
   private def getBatchRecords(context: DQContext): Option[RDD[String]] = {
-    getRecordDataFrame(context).map(_.toJSON.rdd);
+    getRecordDataFrame(context).map(_.toJSON.rdd)
   }
 
-  private def getStreamingRecords(context: DQContext)
-    : (Option[RDD[(Long, Iterable[String])]], Set[Long])
-    = {
-    implicit val encoder = Encoders.tuple(Encoders.scalaLong, Encoders.STRING)
+  private def getStreamingRecords(
+      context: DQContext): (Option[RDD[(Long, Iterable[String])]], Set[Long]) = {
+    implicit val encoder: Encoder[(Long, String)] =
+      Encoders.tuple(Encoders.scalaLong, Encoders.STRING)
     val defTimestamp = context.contextId.timestamp
     getRecordDataFrame(context) match {
       case Some(df) =>
         val (filterFuncOpt, emptyTimestamps) = getFilterTableDataFrame(context) match {
           case Some(filterDf) =>
             // timestamps with empty flag
-            val tmsts: Array[(Long, Boolean)] = (filterDf.collect.flatMap { row =>
+            val tmsts: Array[(Long, Boolean)] = filterDf.collect.flatMap { row =>
               try {
                 val tmst = getTmst(row, defTimestamp)
                 val empty = row.getAs[Boolean](ConstantColumns.empty)
@@ -112,15 +113,15 @@
               } catch {
                 case _: Throwable => None
               }
-            })
+            }
             val emptyTmsts = tmsts.filter(_._2).map(_._1).toSet
             val recordTmsts = tmsts.filter(!_._2).map(_._1).toSet
-            val filterFuncOpt: Option[(Long) => Boolean] = if (recordTmsts.size > 0) {
+            val filterFuncOpt: Option[Long => Boolean] = if (recordTmsts.nonEmpty) {
               Some((t: Long) => recordTmsts.contains(t))
             } else None
 
             (filterFuncOpt, emptyTmsts)
-          case _ => (Some((t: Long) => true), Set[Long]())
+          case _ => (Some((_: Long) => true), Set[Long]())
         }
 
         // filter timestamps need to record
@@ -134,7 +135,7 @@
                   val str = JsonUtil.toJson(map)
                   Some((tmst, str))
                 } catch {
-                  case e: Throwable => None
+                  case _: Throwable => None
                 }
               } else None
             }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/write/SparkRowFormatter.scala b/measure/src/main/scala/org/apache/griffin/measure/step/write/SparkRowFormatter.scala
index 9a61c09..020d07d 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/write/SparkRowFormatter.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/write/SparkRowFormatter.scala
@@ -22,10 +22,9 @@
 import org.apache.spark.sql.Row
 import org.apache.spark.sql.types.{ArrayType, DataType, StructField, StructType}
 
-
 /**
-  * spark row formatter
-  */
+ * spark row formatter
+ */
 object SparkRowFormatter {
 
   def formatRow(row: Row): Map[String, Any] = {
@@ -41,13 +40,14 @@
     paired.foldLeft(Map[String, Any]())((s, p) => s ++ formatItem(p))
   }
 
-  private def formatItem(p: Tuple2[StructField, Any]): Map[String, Any] = {
+  private def formatItem(p: (StructField, Any)): Map[String, Any] = {
     p match {
       case (sf, a) =>
         sf.dataType match {
           case ArrayType(et, _) =>
-            Map(sf.name ->
-              (if (a == null) a else formatArray(et, a.asInstanceOf[ArrayBuffer[Any]])))
+            Map(
+              sf.name ->
+                (if (a == null) a else formatArray(et, a.asInstanceOf[ArrayBuffer[Any]])))
           case StructType(s) =>
             Map(sf.name -> (if (a == null) a else formatStruct(s, a.asInstanceOf[Row])))
           case _ => Map(sf.name -> a)
diff --git a/measure/src/main/scala/org/apache/griffin/measure/step/write/WriteStep.scala b/measure/src/main/scala/org/apache/griffin/measure/step/write/WriteStep.scala
index e908459..3f535fe 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/step/write/WriteStep.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/step/write/WriteStep.scala
@@ -25,6 +25,6 @@
 
   val writeTimestampOpt: Option[Long]
 
-  override def getNames(): Seq[String] = Nil
+  override def getNames: Seq[String] = Nil
 
 }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/utils/DataFrameUtil.scala b/measure/src/main/scala/org/apache/griffin/measure/utils/DataFrameUtil.scala
index e066d64..346e891 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/utils/DataFrameUtil.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/utils/DataFrameUtil.scala
@@ -22,12 +22,11 @@
 
 object DataFrameUtil {
 
-  def unionDfOpts(dfOpt1: Option[DataFrame], dfOpt2: Option[DataFrame]
-                 ): Option[DataFrame] = {
+  def unionDfOpts(dfOpt1: Option[DataFrame], dfOpt2: Option[DataFrame]): Option[DataFrame] = {
     (dfOpt1, dfOpt2) match {
       case (Some(df1), Some(df2)) => Some(unionByName(df1, df2))
-      case (Some(df1), _) => dfOpt1
-      case (_, Some(df2)) => dfOpt2
+      case (Some(_), _) => dfOpt1
+      case (_, Some(_)) => dfOpt2
       case _ => None
     }
   }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/utils/FSUtil.scala b/measure/src/main/scala/org/apache/griffin/measure/utils/FSUtil.scala
index e36bb5a..b7f50e8 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/utils/FSUtil.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/utils/FSUtil.scala
@@ -52,7 +52,7 @@
     }
   }
 
-  private def getConfiguration(): Configuration = {
+  private def getConfiguration: Configuration = {
     val conf = new Configuration()
     conf.setBoolean("dfs.support.append", true)
 //    conf.set("fs.defaultFS", "hdfs://localhost")    // debug in hdfs localhost env
@@ -63,14 +63,14 @@
     val uriOpt = try {
       Some(new URI(path))
     } catch {
-      case e: Throwable => None
+      case _: Throwable => None
     }
     uriOpt.flatMap { uri =>
       if (uri.getScheme == null) {
         try {
           Some(new File(path).toURI)
         } catch {
-          case e: Throwable => None
+          case _: Throwable => None
         }
       } else Some(uri)
     }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/utils/HdfsUtil.scala b/measure/src/main/scala/org/apache/griffin/measure/utils/HdfsUtil.scala
index e750607..b936b84 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/utils/HdfsUtil.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/utils/HdfsUtil.scala
@@ -29,10 +29,10 @@
 
   def existPath(filePath: String): Boolean = {
     try {
-      implicit val path = new Path(filePath)
+      implicit val path: Path = new Path(filePath)
       getFS.exists(path)
     } catch {
-      case e: Throwable => false
+      case _: Throwable => false
     }
   }
 
@@ -42,31 +42,31 @@
   }
 
   def createFile(filePath: String): FSDataOutputStream = {
-    implicit val path = new Path(filePath)
+    implicit val path: Path = new Path(filePath)
     if (getFS.exists(path)) getFS.delete(path, true)
-    return getFS.create(path)
+    getFS.create(path)
   }
 
   def appendOrCreateFile(filePath: String): FSDataOutputStream = {
-    implicit val path = new Path(filePath)
+    implicit val path: Path = new Path(filePath)
     if (getFS.getConf.getBoolean("dfs.support.append", false) && getFS.exists(path)) {
       getFS.append(path)
     } else createFile(filePath)
   }
 
   def openFile(filePath: String): FSDataInputStream = {
-    implicit val path = new Path(filePath)
+    implicit val path: Path = new Path(filePath)
     getFS.open(path)
   }
 
   def writeContent(filePath: String, message: String): Unit = {
     val out = createFile(filePath)
     out.write(message.getBytes("utf-8"))
-    out.close
+    out.close()
   }
 
-  def withHdfsFile(filePath: String, appendIfExists: Boolean = true)
-                  (f: FSDataOutputStream => Unit): Unit = {
+  def withHdfsFile(filePath: String, appendIfExists: Boolean = true)(
+      f: FSDataOutputStream => Unit): Unit = {
     val out =
       if (appendIfExists) {
         appendOrCreateFile(filePath)
@@ -80,7 +80,7 @@
 
   def createEmptyFile(filePath: String): Unit = {
     val out = createFile(filePath)
-    out.close
+    out.close()
   }
 
   def getHdfsFilePath(parentPath: String, fileName: String): String = {
@@ -89,43 +89,45 @@
 
   def deleteHdfsPath(dirPath: String): Unit = {
     try {
-      implicit val path = new Path(dirPath)
+      implicit val path: Path = new Path(dirPath)
       if (getFS.exists(path)) getFS.delete(path, true)
     } catch {
-      case e: Throwable => error(s"delete path [${dirPath}] error: ${e.getMessage}", e)
+      case e: Throwable => error(s"delete path [$dirPath] error: ${e.getMessage}", e)
     }
   }
 
   def listSubPathsByType(
-                          dirPath: String,
-                          subType: String,
-                          fullPath: Boolean = false) : Iterable[String] = {
+      dirPath: String,
+      subType: String,
+      fullPath: Boolean = false): Iterable[String] = {
     if (existPath(dirPath)) {
       try {
-        implicit val path = new Path(dirPath)
+        implicit val path: Path = new Path(dirPath)
         val fileStatusArray = getFS.listStatus(path)
-        fileStatusArray.filter { fileStatus =>
-          subType match {
-            case "dir" => fileStatus.isDirectory
-            case "file" => fileStatus.isFile
-            case _ => true
+        fileStatusArray
+          .filter { fileStatus =>
+            subType match {
+              case "dir" => fileStatus.isDirectory
+              case "file" => fileStatus.isFile
+              case _ => true
+            }
           }
-        }.map { fileStatus =>
-          val fname = fileStatus.getPath.getName
-          if (fullPath) getHdfsFilePath(dirPath, fname) else fname
-        }
+          .map { fileStatus =>
+            val fname = fileStatus.getPath.getName
+            if (fullPath) getHdfsFilePath(dirPath, fname) else fname
+          }
       } catch {
         case e: Throwable =>
-          warn(s"list path [${dirPath}] warn: ${e.getMessage}", e)
+          warn(s"list path [$dirPath] warn: ${e.getMessage}", e)
           Nil
       }
     } else Nil
   }
 
   def listSubPathsByTypes(
-                           dirPath: String,
-                           subTypes: Iterable[String],
-                           fullPath: Boolean = false) : Iterable[String] = {
+      dirPath: String,
+      subTypes: Iterable[String],
+      fullPath: Boolean = false): Iterable[String] = {
     subTypes.flatMap { subType =>
       listSubPathsByType(dirPath, subType, fullPath)
     }
diff --git a/measure/src/main/scala/org/apache/griffin/measure/utils/HttpUtil.scala b/measure/src/main/scala/org/apache/griffin/measure/utils/HttpUtil.scala
index f2e17cd..66648d0 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/utils/HttpUtil.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/utils/HttpUtil.scala
@@ -17,6 +17,8 @@
 
 package org.apache.griffin.measure.utils
 
+import scala.util.matching.Regex
+
 import org.apache.http.client.methods.{HttpGet, HttpPost}
 import org.apache.http.entity.{ContentType, StringEntity}
 import org.apache.http.impl.client.HttpClientBuilder
@@ -24,32 +26,37 @@
 
 object HttpUtil {
 
-  val GET_REGEX = """^(?i)get$""".r
-  val POST_REGEX = """^(?i)post$""".r
-  val PUT_REGEX = """^(?i)put$""".r
-  val DELETE_REGEX = """^(?i)delete$""".r
+  val GET_REGEX: Regex = """^(?i)get$""".r
+  val POST_REGEX: Regex = """^(?i)post$""".r
+  val PUT_REGEX: Regex = """^(?i)put$""".r
+  val DELETE_REGEX: Regex = """^(?i)delete$""".r
 
-  def postData(url: String,
-               params: Map[String, Object],
-               headers: Map[String, Object],
-               data: String): Boolean = {
-    val response = Http(url).params(convertObjMap2StrMap(params))
-      .headers(convertObjMap2StrMap(headers)).postData(data).asString
+  def postData(
+      url: String,
+      params: Map[String, Object],
+      headers: Map[String, Object],
+      data: String): Boolean = {
+    val response = Http(url)
+      .params(convertObjMap2StrMap(params))
+      .headers(convertObjMap2StrMap(headers))
+      .postData(data)
+      .asString
 
     response.isSuccess
   }
 
-  def doHttpRequest(url: String,
-                  method: String,
-                  params: Map[String, Object],
-                  headers: Map[String, Object],
-                  data: String): Boolean = {
+  def doHttpRequest(
+      url: String,
+      method: String,
+      params: Map[String, Object],
+      headers: Map[String, Object],
+      data: String): Boolean = {
     val client = HttpClientBuilder.create.build
     method match {
       case POST_REGEX() =>
         val post = new HttpPost(url)
         convertObjMap2StrMap(headers) foreach (header => post.addHeader(header._1, header._2))
-        post.setEntity(new StringEntity(data, ContentType.APPLICATION_JSON));
+        post.setEntity(new StringEntity(data, ContentType.APPLICATION_JSON))
 
         // send the post request
         val response = client.execute(post)
@@ -65,13 +72,15 @@
     }
   }
 
-  def httpRequest(url: String,
-                  method: String,
-                  params: Map[String, Object],
-                  headers: Map[String, Object],
-                  data: String): Boolean = {
-    val httpReq = Http(url).params(convertObjMap2StrMap(params))
-        .headers(convertObjMap2StrMap(headers))
+  def httpRequest(
+      url: String,
+      method: String,
+      params: Map[String, Object],
+      headers: Map[String, Object],
+      data: String): Boolean = {
+    val httpReq = Http(url)
+      .params(convertObjMap2StrMap(params))
+      .headers(convertObjMap2StrMap(headers))
     method match {
       case POST_REGEX() =>
         val res = httpReq.postData(data).asString
diff --git a/measure/src/main/scala/org/apache/griffin/measure/utils/JsonUtil.scala b/measure/src/main/scala/org/apache/griffin/measure/utils/JsonUtil.scala
index 80387e2..db6bb63 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/utils/JsonUtil.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/utils/JsonUtil.scala
@@ -24,14 +24,13 @@
 import com.fasterxml.jackson.databind.{DeserializationFeature, ObjectMapper}
 import com.fasterxml.jackson.module.scala.DefaultScalaModule
 
-
 object JsonUtil {
   val mapper = new ObjectMapper()
   mapper.registerModule(DefaultScalaModule)
   mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
 
   def toJson(value: Map[Symbol, Any]): String = {
-    toJson(value map { case (k, v) => k.name -> v})
+    toJson(value map { case (k, v) => k.name -> v })
   }
 
   def toJson(value: Any): String = {
diff --git a/measure/src/main/scala/org/apache/griffin/measure/utils/ParamUtil.scala b/measure/src/main/scala/org/apache/griffin/measure/utils/ParamUtil.scala
index ba73b53..ccf8e00 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/utils/ParamUtil.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/utils/ParamUtil.scala
@@ -193,16 +193,18 @@
       params.get(key).flatMap(toBoolean).getOrElse(defValue)
     }
 
-    def getParamAnyMap(key: String, defValue: Map[String, Any]
-      = Map[String, Any]()): Map[String, Any] = {
+    def getParamAnyMap(
+        key: String,
+        defValue: Map[String, Any] = Map[String, Any]()): Map[String, Any] = {
       params.get(key) match {
         case Some(v: Map[_, _]) => v.map(pair => (pair._1.toString, pair._2))
         case _ => defValue
       }
     }
 
-    def getParamStringMap(key: String, defValue: Map[String, String]
-    = Map[String, String]()): Map[String, String] = {
+    def getParamStringMap(
+        key: String,
+        defValue: Map[String, String] = Map[String, String]()): Map[String, String] = {
       params.get(key) match {
         case Some(v: Map[_, _]) => v.map(pair => (pair._1.toString, pair._2.toString))
         case _ => defValue
@@ -236,4 +238,3 @@
   }
 
 }
-
diff --git a/measure/src/main/scala/org/apache/griffin/measure/utils/ThreadUtils.scala b/measure/src/main/scala/org/apache/griffin/measure/utils/ThreadUtils.scala
index 199ca6c..52c5f38 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/utils/ThreadUtils.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/utils/ThreadUtils.scala
@@ -21,7 +21,10 @@
 
 import scala.concurrent.{Awaitable, ExecutionContext, ExecutionContextExecutor}
 import scala.concurrent.duration.Duration
-import scala.concurrent.forkjoin.{ForkJoinPool => SForkJoinPool, ForkJoinWorkerThread => SForkJoinWorkerThread}
+import scala.concurrent.forkjoin.{
+  ForkJoinPool => SForkJoinPool,
+  ForkJoinWorkerThread => SForkJoinWorkerThread
+}
 import scala.util.control.NonFatal
 
 import com.google.common.util.concurrent.{MoreExecutors, ThreadFactoryBuilder}
@@ -59,7 +62,9 @@
    * are formatted as prefix-ID, where ID is a unique, sequentially assigned integer.
    */
   def newDaemonCachedThreadPool(
-      prefix: String, maxThreadNumber: Int, keepAliveSeconds: Int = 60): ThreadPoolExecutor = {
+      prefix: String,
+      maxThreadNumber: Int,
+      keepAliveSeconds: Int = 60): ThreadPoolExecutor = {
     val threadFactory = namedThreadFactory(prefix)
     val threadPool = new ThreadPoolExecutor(
       maxThreadNumber, // corePoolSize: the max number of threads to create before queuing the tasks
@@ -85,7 +90,8 @@
    * Wrapper over newSingleThreadExecutor.
    */
   def newDaemonSingleThreadExecutor(threadName: String): ExecutorService = {
-    val threadFactory = new ThreadFactoryBuilder().setDaemon(true).setNameFormat(threadName).build()
+    val threadFactory =
+      new ThreadFactoryBuilder().setDaemon(true).setNameFormat(threadName).build()
     Executors.newSingleThreadExecutor(threadFactory)
   }
 
@@ -93,7 +99,8 @@
    * Wrapper over ScheduledThreadPoolExecutor.
    */
   def newDaemonSingleThreadScheduledExecutor(threadName: String): ScheduledExecutorService = {
-    val threadFactory = new ThreadFactoryBuilder().setDaemon(true).setNameFormat(threadName).build()
+    val threadFactory =
+      new ThreadFactoryBuilder().setDaemon(true).setNameFormat(threadName).build()
     val executor = new ScheduledThreadPoolExecutor(1, threadFactory)
     // By default, a cancelled task is not automatically removed from the work queue until its delay
     // elapses. We have to enable it manually.
@@ -112,9 +119,7 @@
    *   at CallerClass.caller-method (sourcefile.scala)
    *   ...
    */
-  def runInNewThread[T](
-      threadName: String,
-      isDaemon: Boolean = true)(body: => T): T = {
+  def runInNewThread[T](threadName: String, isDaemon: Boolean = true)(body: => T): T = {
     @volatile var exception: Option[Throwable] = None
     @volatile var result: T = null.asInstanceOf[T]
 
@@ -137,18 +142,23 @@
         // Remove the part of the stack that shows method calls into this helper method
         // This means drop everything from the top until the stack element
         // ThreadUtils.runInNewThread(), and then drop that as well (hence the `drop(1)`).
-        val baseStackTrace = Thread.currentThread().getStackTrace().dropWhile(
-          ! _.getClassName.contains(this.getClass.getSimpleName)).drop(1)
+        val baseStackTrace = Thread
+          .currentThread()
+          .getStackTrace
+          .dropWhile(!_.getClassName.contains(this.getClass.getSimpleName))
+          .drop(1)
 
         // Remove the part of the new thread stack that shows methods call from this helper method
         val extraStackTrace = realException.getStackTrace.takeWhile(
-          ! _.getClassName.contains(this.getClass.getSimpleName))
+          !_.getClassName.contains(this.getClass.getSimpleName))
 
         // Combine the two stack traces, with a place holder just specifying that there
         // was a helper method used, without any further details of the helper
         val placeHolderStackElem = new StackTraceElement(
           s"... run in separate thread using ${ThreadUtils.getClass.getName.stripSuffix("$")} ..",
-          " ", "", -1)
+          " ",
+          "",
+          -1)
         val finalStackTrace = extraStackTrace ++ Seq(placeHolderStackElem) ++ baseStackTrace
 
         // Update the stack trace and rethrow the exception in the caller thread
@@ -165,12 +175,14 @@
   def newForkJoinPool(prefix: String, maxThreadNumber: Int): SForkJoinPool = {
     // Custom factory to set thread names
     val factory = new SForkJoinPool.ForkJoinWorkerThreadFactory {
-      override def newThread(pool: SForkJoinPool) =
+      override def newThread(pool: SForkJoinPool): SForkJoinWorkerThread =
         new SForkJoinWorkerThread(pool) {
           setName(prefix + "-" + super.getName)
         }
     }
-    new SForkJoinPool(maxThreadNumber, factory,
+    new SForkJoinPool(
+      maxThreadNumber,
+      factory,
       null, // handler
       false // asyncMode
     )
diff --git a/measure/src/main/scala/org/apache/griffin/measure/utils/TimeUtil.scala b/measure/src/main/scala/org/apache/griffin/measure/utils/TimeUtil.scala
index 303399a..38733d7 100644
--- a/measure/src/main/scala/org/apache/griffin/measure/utils/TimeUtil.scala
+++ b/measure/src/main/scala/org/apache/griffin/measure/utils/TimeUtil.scala
@@ -26,18 +26,19 @@
 
   private object Units {
     case class TimeUnit(name: String, shortName: String, ut: Long, regex: Regex) {
-      def toMs(t: Long) : Long = t * ut
-      def fromMs(ms: Long) : Long = ms / ut
-      def fitUnit(ms: Long) : Boolean = (ms % ut == 0)
+      def toMs(t: Long): Long = t * ut
+      def fromMs(ms: Long): Long = ms / ut
+      def fitUnit(ms: Long): Boolean = ms % ut == 0
     }
 
-    val dayUnit = TimeUnit("day", "d", 24 * 60 * 60 * 1000, """^(?i)d(?:ay)?$""".r)
-    val hourUnit = TimeUnit("hour", "h", 60 * 60 * 1000, """^(?i)h(?:our|r)?$""".r)
-    val minUnit = TimeUnit("minute", "m", 60 * 1000, """^(?i)m(?:in(?:ute)?)?$""".r)
-    val secUnit = TimeUnit("second", "s", 1000, """^(?i)s(?:ec(?:ond)?)?$""".r)
-    val msUnit = TimeUnit("millisecond", "ms", 1, """^(?i)m(?:illi)?s(?:ec(?:ond)?)?$""".r)
+    val dayUnit: TimeUnit = TimeUnit("day", "d", 24 * 60 * 60 * 1000, """^(?i)d(?:ay)?$""".r)
+    val hourUnit: TimeUnit = TimeUnit("hour", "h", 60 * 60 * 1000, """^(?i)h(?:our|r)?$""".r)
+    val minUnit: TimeUnit = TimeUnit("minute", "m", 60 * 1000, """^(?i)m(?:in(?:ute)?)?$""".r)
+    val secUnit: TimeUnit = TimeUnit("second", "s", 1000, """^(?i)s(?:ec(?:ond)?)?$""".r)
+    val msUnit: TimeUnit =
+      TimeUnit("millisecond", "ms", 1, """^(?i)m(?:illi)?s(?:ec(?:ond)?)?$""".r)
 
-    val timeUnits = dayUnit :: hourUnit :: minUnit :: secUnit :: msUnit :: Nil
+    val timeUnits: List[TimeUnit] = dayUnit :: hourUnit :: minUnit :: secUnit :: msUnit :: Nil
   }
   import Units._
 
@@ -57,16 +58,16 @@
               case minUnit.regex() => minUnit.toMs(t)
               case secUnit.regex() => secUnit.toMs(t)
               case msUnit.regex() => msUnit.toMs(t)
-              case _ => throw new Exception(s"${timeString} is invalid time format")
+              case _ => throw new Exception(s"$timeString is invalid time format")
             }
           case PureTimeRegex(time) =>
             val t = time.toLong
             msUnit.toMs(t)
-          case _ => throw new Exception(s"${timeString} is invalid time format")
+          case _ => throw new Exception(s"$timeString is invalid time format")
         }
       } match {
         case Success(v) => Some(v)
-        case Failure(ex) => None
+        case Failure(_) => None
       }
     }
     value
@@ -101,7 +102,7 @@
     val unit = matchedUnitOpt.getOrElse(msUnit)
     val unitTime = unit.fromMs(t)
     val unitStr = unit.shortName
-    s"${unitTime}${unitStr}"
+    s"$unitTime$unitStr"
   }
 
 }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/SparkSuiteBase.scala b/measure/src/test/scala/org/apache/griffin/measure/SparkSuiteBase.scala
index 4dbf38c..dbed89c 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/SparkSuiteBase.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/SparkSuiteBase.scala
@@ -65,11 +65,11 @@
 
   def cleanTestHiveData(): Unit = {
     val metastoreDB = new File("metastore_db")
-    if(metastoreDB.exists) {
+    if (metastoreDB.exists) {
       FileUtils.forceDelete(metastoreDB)
     }
     val sparkWarehouse = new File("spark-warehouse")
-    if(sparkWarehouse.exists) {
+    if (sparkWarehouse.exists) {
       FileUtils.forceDelete(sparkWarehouse)
     }
   }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamEnumReaderSpec.scala b/measure/src/test/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamEnumReaderSpec.scala
index a30d51e..b881dff 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamEnumReaderSpec.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamEnumReaderSpec.scala
@@ -19,8 +19,12 @@
 
 import org.scalatest.{FlatSpec, Matchers}
 
-import org.apache.griffin.measure.configuration.dqdefinition.{DQConfig, EvaluateRuleParam, RuleOutputParam, RuleParam}
-
+import org.apache.griffin.measure.configuration.dqdefinition.{
+  DQConfig,
+  EvaluateRuleParam,
+  RuleOutputParam,
+  RuleParam
+}
 
 class ParamEnumReaderSpec extends FlatSpec with Matchers {
   import org.apache.griffin.measure.configuration.enums.DslType._
@@ -28,40 +32,33 @@
     val validDslSparkSqlValues =
       Seq("spark-sql", "spark-SQL", "SPARK-SQL", "sparksql")
     validDslSparkSqlValues foreach { x =>
-      val ruleParam = new RuleParam(x, "accuracy")
+      val ruleParam = RuleParam(x, "accuracy")
       ruleParam.getDslType should ===(SparkSql)
     }
     val invalidDslSparkSqlValues = Seq("spark", "sql", "")
     invalidDslSparkSqlValues foreach { x =>
-      val ruleParam = new RuleParam(x, "accuracy")
-      ruleParam.getDslType should not be (SparkSql)
+      val ruleParam = RuleParam(x, "accuracy")
+      ruleParam.getDslType should not be SparkSql
     }
 
     val validDslGriffinValues =
       Seq("griffin-dsl", "griffindsl", "griFfin-dsl", "")
     validDslGriffinValues foreach { x =>
-      val ruleParam = new RuleParam(x, "accuracy")
+      val ruleParam = RuleParam(x, "accuracy")
       ruleParam.getDslType should ===(GriffinDsl)
     }
 
-    val validDslDfOpsValues = Seq(
-      "df-ops",
-      "dfops",
-      "DFOPS",
-      "df-opr",
-      "dfopr",
-      "df-operations",
-      "dfoperations"
-    )
+    val validDslDfOpsValues =
+      Seq("df-ops", "dfops", "DFOPS", "df-opr", "dfopr", "df-operations", "dfoperations")
     validDslDfOpsValues foreach { x =>
-      val ruleParam = new RuleParam(x, "accuracy")
+      val ruleParam = RuleParam(x, "accuracy")
       ruleParam.getDslType should ===(DataFrameOpsType)
     }
 
     val invalidDslDfOpsValues = Seq("df-oprts", "-")
     invalidDslDfOpsValues foreach { x =>
-      val ruleParam = new RuleParam(x, "accuracy")
-      ruleParam.getDslType should not be (DataFrameOpsType)
+      val ruleParam = RuleParam(x, "accuracy")
+      ruleParam.getDslType should not be DataFrameOpsType
     }
   }
 
@@ -69,104 +66,104 @@
     import org.apache.griffin.measure.configuration.enums.DslType._
     val dslGriffinDslValues = Seq("griffin", "dsl")
     dslGriffinDslValues foreach { x =>
-      val ruleParam = new RuleParam(x, "accuracy")
+      val ruleParam = RuleParam(x, "accuracy")
       ruleParam.getDslType should be(GriffinDsl)
     }
   }
 
   "dqtype" should "be parsed to predefined set of values" in {
     import org.apache.griffin.measure.configuration.enums.DqType._
-    var ruleParam = new RuleParam("griffin-dsl", "accuracy")
+    var ruleParam = RuleParam("griffin-dsl", "accuracy")
     ruleParam.getDqType should be(Accuracy)
-    ruleParam = new RuleParam("griffin-dsl", "accu")
-    ruleParam.getDqType should not be (Accuracy)
+    ruleParam = RuleParam("griffin-dsl", "accu")
+    ruleParam.getDqType should not be Accuracy
     ruleParam.getDqType should be(Unknown)
 
-    ruleParam = new RuleParam("griffin-dsl", "profiling")
+    ruleParam = RuleParam("griffin-dsl", "profiling")
     ruleParam.getDqType should be(Profiling)
-    ruleParam = new RuleParam("griffin-dsl", "profilin")
-    ruleParam.getDqType should not be (Profiling)
+    ruleParam = RuleParam("griffin-dsl", "profilin")
+    ruleParam.getDqType should not be Profiling
     ruleParam.getDqType should be(Unknown)
 
-    ruleParam = new RuleParam("griffin-dsl", "TIMELINESS")
+    ruleParam = RuleParam("griffin-dsl", "TIMELINESS")
     ruleParam.getDqType should be(Timeliness)
-    ruleParam = new RuleParam("griffin-dsl", "timeliness ")
-    ruleParam.getDqType should not be (Timeliness)
+    ruleParam = RuleParam("griffin-dsl", "timeliness ")
+    ruleParam.getDqType should not be Timeliness
     ruleParam.getDqType should be(Unknown)
 
-    ruleParam = new RuleParam("griffin-dsl", "UNIQUENESS")
+    ruleParam = RuleParam("griffin-dsl", "UNIQUENESS")
     ruleParam.getDqType should be(Uniqueness)
-    ruleParam = new RuleParam("griffin-dsl", "UNIQUE")
-    ruleParam.getDqType should not be (Uniqueness)
+    ruleParam = RuleParam("griffin-dsl", "UNIQUE")
+    ruleParam.getDqType should not be Uniqueness
     ruleParam.getDqType should be(Unknown)
 
-    ruleParam = new RuleParam("griffin-dsl", "Duplicate")
+    ruleParam = RuleParam("griffin-dsl", "Duplicate")
     ruleParam.getDqType should be(Uniqueness)
-    ruleParam = new RuleParam("griffin-dsl", "duplica")
-    ruleParam.getDqType should not be (Duplicate)
+    ruleParam = RuleParam("griffin-dsl", "duplica")
+    ruleParam.getDqType should not be Duplicate
     ruleParam.getDqType should be(Unknown)
 
-    ruleParam = new RuleParam("griffin-dsl", "COMPLETENESS")
+    ruleParam = RuleParam("griffin-dsl", "COMPLETENESS")
     ruleParam.getDqType should be(Completeness)
-    ruleParam = new RuleParam("griffin-dsl", "complete")
-    ruleParam.getDqType should not be (Completeness)
+    ruleParam = RuleParam("griffin-dsl", "complete")
+    ruleParam.getDqType should not be Completeness
     ruleParam.getDqType should be(Unknown)
 
-    ruleParam = new RuleParam("griffin-dsl", "")
+    ruleParam = RuleParam("griffin-dsl", "")
     ruleParam.getDqType should be(Unknown)
-    ruleParam = new RuleParam("griffin-dsl", "duplicate")
-    ruleParam.getDqType should not be (Unknown)
+    ruleParam = RuleParam("griffin-dsl", "duplicate")
+    ruleParam.getDqType should not be Unknown
   }
 
   "outputtype" should "be valid" in {
     import org.apache.griffin.measure.configuration.enums.OutputType._
-    var ruleOutputParam = new RuleOutputParam("metric", "", "map")
+    var ruleOutputParam = RuleOutputParam("metric", "", "map")
     ruleOutputParam.getOutputType should be(MetricOutputType)
-    ruleOutputParam = new RuleOutputParam("metr", "", "map")
-    ruleOutputParam.getOutputType should not be (MetricOutputType)
+    ruleOutputParam = RuleOutputParam("metr", "", "map")
+    ruleOutputParam.getOutputType should not be MetricOutputType
     ruleOutputParam.getOutputType should be(UnknownOutputType)
 
-    ruleOutputParam = new RuleOutputParam("record", "", "map")
+    ruleOutputParam = RuleOutputParam("record", "", "map")
     ruleOutputParam.getOutputType should be(RecordOutputType)
-    ruleOutputParam = new RuleOutputParam("rec", "", "map")
-    ruleOutputParam.getOutputType should not be (RecordOutputType)
+    ruleOutputParam = RuleOutputParam("rec", "", "map")
+    ruleOutputParam.getOutputType should not be RecordOutputType
     ruleOutputParam.getOutputType should be(UnknownOutputType)
 
-    ruleOutputParam = new RuleOutputParam("dscupdate", "", "map")
+    ruleOutputParam = RuleOutputParam("dscupdate", "", "map")
     ruleOutputParam.getOutputType should be(DscUpdateOutputType)
-    ruleOutputParam = new RuleOutputParam("dsc", "", "map")
-    ruleOutputParam.getOutputType should not be (DscUpdateOutputType)
+    ruleOutputParam = RuleOutputParam("dsc", "", "map")
+    ruleOutputParam.getOutputType should not be DscUpdateOutputType
     ruleOutputParam.getOutputType should be(UnknownOutputType)
 
   }
 
   "flattentype" should "be valid" in {
     import org.apache.griffin.measure.configuration.enums.FlattenType._
-    var ruleOutputParam = new RuleOutputParam("metric", "", "map")
+    var ruleOutputParam = RuleOutputParam("metric", "", "map")
     ruleOutputParam.getFlatten should be(MapFlattenType)
-    ruleOutputParam = new RuleOutputParam("metric", "", "metr")
-    ruleOutputParam.getFlatten should not be (MapFlattenType)
+    ruleOutputParam = RuleOutputParam("metric", "", "metr")
+    ruleOutputParam.getFlatten should not be MapFlattenType
     ruleOutputParam.getFlatten should be(DefaultFlattenType)
 
-    ruleOutputParam = new RuleOutputParam("metric", "", "array")
+    ruleOutputParam = RuleOutputParam("metric", "", "array")
     ruleOutputParam.getFlatten should be(ArrayFlattenType)
-    ruleOutputParam = new RuleOutputParam("metric", "", "list")
+    ruleOutputParam = RuleOutputParam("metric", "", "list")
     ruleOutputParam.getFlatten should be(ArrayFlattenType)
-    ruleOutputParam = new RuleOutputParam("metric", "", "arrays")
-    ruleOutputParam.getFlatten should not be (ArrayFlattenType)
+    ruleOutputParam = RuleOutputParam("metric", "", "arrays")
+    ruleOutputParam.getFlatten should not be ArrayFlattenType
     ruleOutputParam.getFlatten should be(DefaultFlattenType)
 
-    ruleOutputParam = new RuleOutputParam("metric", "", "entries")
+    ruleOutputParam = RuleOutputParam("metric", "", "entries")
     ruleOutputParam.getFlatten should be(EntriesFlattenType)
-    ruleOutputParam = new RuleOutputParam("metric", "", "entry")
-    ruleOutputParam.getFlatten should not be (EntriesFlattenType)
+    ruleOutputParam = RuleOutputParam("metric", "", "entry")
+    ruleOutputParam.getFlatten should not be EntriesFlattenType
     ruleOutputParam.getFlatten should be(DefaultFlattenType)
   }
 
   "sinktype" should "be valid" in {
     import org.mockito.Mockito._
     import org.apache.griffin.measure.configuration.enums.SinkType._
-    var dqConfig = new DQConfig(
+    var dqConfig = DQConfig(
       "test",
       1234,
       "",
@@ -182,36 +179,14 @@
         "Http",
         "MongoDB",
         "mongo",
-        "hdfs"
-      )
-    )
-    dqConfig.getValidSinkTypes should be(
-      Seq(
-        Console,
-        ElasticSearch,
-        MongoDB,
-        Hdfs
-      )
-    )
-    dqConfig = new DQConfig(
-      "test",
-      1234,
-      "",
-      Nil,
-      mock(classOf[EvaluateRuleParam]),
-      List("Consol", "Logg")
-    )
-    dqConfig.getValidSinkTypes should not be (Seq(Console))
+        "hdfs"))
+    dqConfig.getValidSinkTypes should be(Seq(Console, ElasticSearch, MongoDB, Hdfs))
+    dqConfig =
+      DQConfig("test", 1234, "", Nil, mock(classOf[EvaluateRuleParam]), List("Consol", "Logg"))
+    dqConfig.getValidSinkTypes should not be Seq(Console)
     dqConfig.getValidSinkTypes should be(Seq(ElasticSearch))
 
-    dqConfig = new DQConfig(
-      "test",
-      1234,
-      "",
-      Nil,
-      mock(classOf[EvaluateRuleParam]),
-      List("")
-    )
+    dqConfig = DQConfig("test", 1234, "", Nil, mock(classOf[EvaluateRuleParam]), List(""))
     dqConfig.getValidSinkTypes should be(Seq(ElasticSearch))
   }
 
diff --git a/measure/src/test/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamFileReaderSpec.scala b/measure/src/test/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamFileReaderSpec.scala
index 38d8db3..a05df90 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamFileReaderSpec.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamFileReaderSpec.scala
@@ -23,17 +23,16 @@
 import org.apache.griffin.measure.configuration.dqdefinition.DQConfig
 import org.apache.griffin.measure.configuration.enums.DslType.GriffinDsl
 
-
-class ParamFileReaderSpec extends FlatSpec with Matchers{
-
+class ParamFileReaderSpec extends FlatSpec with Matchers {
 
   "params " should "be parsed from a valid file" in {
-    val reader: ParamReader = ParamFileReader(getClass.getResource("/_accuracy-batch-griffindsl.json").getFile)
+    val reader: ParamReader =
+      ParamFileReader(getClass.getResource("/_accuracy-batch-griffindsl.json").getFile)
     val params = reader.readConfig[DQConfig]
     params match {
       case Success(v) =>
-        v.getEvaluateRule.getRules(0).getDslType should === (GriffinDsl)
-        v.getEvaluateRule.getRules(0).getOutDfName() should === ("accu")
+        v.getEvaluateRule.getRules.head.getDslType should ===(GriffinDsl)
+        v.getEvaluateRule.getRules.head.getOutDfName() should ===("accu")
       case Failure(_) =>
         fail("it should not happen")
     }
@@ -41,39 +40,41 @@
   }
 
   it should "fail for an invalid file" in {
-    val reader: ParamReader = ParamFileReader(getClass.
-      getResource("/invalidconfigs/missingrule_accuracy_batch_sparksql.json").getFile)
+    val reader: ParamReader = ParamFileReader(
+      getClass.getResource("/invalidconfigs/missingrule_accuracy_batch_sparksql.json").getFile)
     val params = reader.readConfig[DQConfig]
     params match {
       case Success(_) =>
         fail("it is an invalid config file")
       case Failure(e) =>
-        e.getMessage contains ("evaluate.rule should not be null")
+        e.getMessage contains "evaluate.rule should not be null"
     }
 
   }
 
   it should "fail for an invalid completeness json file" in {
-    val reader: ParamFileReader = ParamFileReader(getClass.
-      getResource("/invalidconfigs/invalidtype_completeness_batch_griffindal.json").getFile)
+    val reader: ParamFileReader = ParamFileReader(
+      getClass
+        .getResource("/invalidconfigs/invalidtype_completeness_batch_griffindal.json")
+        .getFile)
     val params = reader.readConfig[DQConfig]
     params match {
       case Success(_) =>
         fail("it is an invalid config file")
       case Failure(e) =>
-        e.getMessage contains ("error error.conf type")
+        e.getMessage contains "error error.conf type"
     }
   }
 
   it should "be parsed from a valid errorconf completeness json file" in {
-    val reader: ParamReader = ParamFileReader(getClass.
-      getResource("/_completeness_errorconf-batch-griffindsl.json").getFile)
+    val reader: ParamReader = ParamFileReader(
+      getClass.getResource("/_completeness_errorconf-batch-griffindsl.json").getFile)
     val params = reader.readConfig[DQConfig]
     params match {
       case Success(v) =>
-        v.getEvaluateRule.getRules(0).getErrorConfs.length should === (2)
-        v.getEvaluateRule.getRules(0).getErrorConfs(0).getColumnName.get should === ("user")
-        v.getEvaluateRule.getRules(0).getErrorConfs(1).getColumnName.get should === ("name")
+        v.getEvaluateRule.getRules.head.getErrorConfs.length should ===(2)
+        v.getEvaluateRule.getRules.head.getErrorConfs.head.getColumnName.get should ===("user")
+        v.getEvaluateRule.getRules.head.getErrorConfs(1).getColumnName.get should ===("name")
       case Failure(_) =>
         fail("it should not happen")
     }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamJsonReaderSpec.scala b/measure/src/test/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamJsonReaderSpec.scala
index 54bd95b..86d68b5 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamJsonReaderSpec.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/configuration/dqdefinition/reader/ParamJsonReaderSpec.scala
@@ -25,12 +25,11 @@
 import org.apache.griffin.measure.configuration.dqdefinition.DQConfig
 import org.apache.griffin.measure.configuration.enums.DslType.GriffinDsl
 
-
-class ParamJsonReaderSpec extends FlatSpec with Matchers{
-
+class ParamJsonReaderSpec extends FlatSpec with Matchers {
 
   "params " should "be parsed from a valid file" in {
-    val bufferedSource = Source.fromFile(getClass.getResource("/_accuracy-batch-griffindsl.json").getFile)
+    val bufferedSource =
+      Source.fromFile(getClass.getResource("/_accuracy-batch-griffindsl.json").getFile)
     val jsonString = bufferedSource.getLines().mkString
     bufferedSource.close
 
@@ -38,8 +37,8 @@
     val params = reader.readConfig[DQConfig]
     params match {
       case Success(v) =>
-        v.getEvaluateRule.getRules(0).getDslType should === (GriffinDsl)
-        v.getEvaluateRule.getRules(0).getOutDfName() should === ("accu")
+        v.getEvaluateRule.getRules.head.getDslType should ===(GriffinDsl)
+        v.getEvaluateRule.getRules.head.getOutDfName() should ===("accu")
       case Failure(_) =>
         fail("it should not happen")
     }
@@ -47,8 +46,8 @@
   }
 
   it should "fail for an invalid file" in {
-    val bufferedSource = Source.fromFile(getClass.
-      getResource("/invalidconfigs/missingrule_accuracy_batch_sparksql.json").getFile)
+    val bufferedSource = Source.fromFile(
+      getClass.getResource("/invalidconfigs/missingrule_accuracy_batch_sparksql.json").getFile)
     val jsonString = bufferedSource.getLines().mkString
     bufferedSource.close
 
@@ -58,11 +57,9 @@
       case Success(_) =>
         fail("it is an invalid config file")
       case Failure(e) =>
-        e.getMessage should include ("evaluate.rule should not be null")
+        e.getMessage should include("evaluate.rule should not be null")
     }
 
   }
 
 }
-
-
diff --git a/measure/src/test/scala/org/apache/griffin/measure/context/DataFrameCacheTest.scala b/measure/src/test/scala/org/apache/griffin/measure/context/DataFrameCacheTest.scala
index 7465917..898c8fd 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/context/DataFrameCacheTest.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/context/DataFrameCacheTest.scala
@@ -26,11 +26,11 @@
 class DataFrameCacheTest extends FlatSpec with Matchers with SparkSuiteBase {
 
   def createDataFrame(arr: Seq[Int]): DataFrame = {
-    val schema = StructType(Array(
-      StructField("id", LongType),
-      StructField("name", StringType),
-      StructField("age", IntegerType)
-    ))
+    val schema = StructType(
+      Array(
+        StructField("id", LongType),
+        StructField("name", StringType),
+        StructField("age", IntegerType)))
     val rows = arr.map { i =>
       Row(i.toLong, s"name_$i", i + 15)
     }
@@ -48,22 +48,21 @@
     dfCache.cacheDataFrame("t1", df1)
     dfCache.cacheDataFrame("t2", df2)
     dfCache.cacheDataFrame("t3", df3)
-    dfCache.dataFrames.get("t2") should be (Some(df2))
+    dfCache.dataFrames.get("t2") should be(Some(df2))
 
     // uncache
     dfCache.uncacheDataFrame("t2")
-    dfCache.dataFrames.get("t2") should be (None)
-    dfCache.trashDataFrames.toList should be (df2 :: Nil)
+    dfCache.dataFrames.get("t2") should be(None)
+    dfCache.trashDataFrames.toList should be(df2 :: Nil)
 
     // uncache all
     dfCache.uncacheAllDataFrames()
-    dfCache.dataFrames.toMap should be (Map[String, DataFrame]())
-    dfCache.trashDataFrames.toList should be (df2 :: df1 :: df3 :: Nil)
+    dfCache.dataFrames.toMap should be(Map[String, DataFrame]())
+    dfCache.trashDataFrames.toList should be(df2 :: df1 :: df3 :: Nil)
 
     // clear all trash
     dfCache.clearAllTrashDataFrames()
-    dfCache.trashDataFrames.toList should be (Nil)
+    dfCache.trashDataFrames.toList should be(Nil)
   }
 
-
 }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/context/MetricWrapperTest.scala b/measure/src/test/scala/org/apache/griffin/measure/context/MetricWrapperTest.scala
index 9068a97..7d25cb1 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/context/MetricWrapperTest.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/context/MetricWrapperTest.scala
@@ -23,21 +23,27 @@
 
   "metric wrapper" should "flush empty if no metric inserted" in {
     val metricWrapper = MetricWrapper("name", "appId")
-    metricWrapper.flush should be (Map[Long, Map[String, Any]]())
+    metricWrapper.flush should be(Map[Long, Map[String, Any]]())
   }
 
   it should "flush all metrics inserted" in {
     val metricWrapper = MetricWrapper("test", "appId")
-    metricWrapper.insertMetric(1, Map("total" -> 10, "miss"-> 2))
+    metricWrapper.insertMetric(1, Map("total" -> 10, "miss" -> 2))
     metricWrapper.insertMetric(1, Map("match" -> 8))
     metricWrapper.insertMetric(2, Map("total" -> 20))
     metricWrapper.insertMetric(2, Map("miss" -> 4))
-    metricWrapper.flush should be (Map(
-      1L -> Map("name" -> "test", "tmst" -> 1, "value" -> Map("total" -> 10, "miss"-> 2, "match" -> 8),
-        "metadata" -> Map("applicationId" -> "appId")),
-      2L -> Map("name" -> "test", "tmst" -> 2, "value" -> Map("total" -> 20, "miss"-> 4),
-        "metadata" -> Map("applicationId" -> "appId"))
-    ))
+    metricWrapper.flush should be(
+      Map(
+        1L -> Map(
+          "name" -> "test",
+          "tmst" -> 1,
+          "value" -> Map("total" -> 10, "miss" -> 2, "match" -> 8),
+          "metadata" -> Map("applicationId" -> "appId")),
+        2L -> Map(
+          "name" -> "test",
+          "tmst" -> 2,
+          "value" -> Map("total" -> 20, "miss" -> 4),
+          "metadata" -> Map("applicationId" -> "appId"))))
   }
 
 }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/context/TimeRangeTest.scala b/measure/src/test/scala/org/apache/griffin/measure/context/TimeRangeTest.scala
index 895f658..28c195b 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/context/TimeRangeTest.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/context/TimeRangeTest.scala
@@ -24,17 +24,17 @@
   "time range" should "be able to merge another time range" in {
     val tr1 = TimeRange(1, 10, Set(2, 5, 8))
     val tr2 = TimeRange(4, 15, Set(5, 6, 13, 7))
-    tr1.merge(tr2) should be (TimeRange(1, 15, Set(2, 5, 6, 7, 8, 13)))
+    tr1.merge(tr2) should be(TimeRange(1, 15, Set(2, 5, 6, 7, 8, 13)))
   }
 
   it should "get minimum timestamp in not-empty timestamp set" in {
     val tr = TimeRange(1, 10, Set(2, 5, 8))
-    tr.minTmstOpt should be (Some(2))
+    tr.minTmstOpt should be(Some(2))
   }
 
   it should "not get minimum timestamp in empty timestamp set" in {
     val tr = TimeRange(1, 10, Set[Long]())
-    tr.minTmstOpt should be (None)
+    tr.minTmstOpt should be(None)
   }
 
 }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/datasource/TimestampStorageTest.scala b/measure/src/test/scala/org/apache/griffin/measure/datasource/TimestampStorageTest.scala
index 7dabbad..a90768e 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/datasource/TimestampStorageTest.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/datasource/TimestampStorageTest.scala
@@ -24,51 +24,51 @@
   "timestamp storage" should "be able to insert a timestamp" in {
     val timestampStorage = TimestampStorage()
     timestampStorage.insert(1L)
-    timestampStorage.all should be (Set(1L))
+    timestampStorage.all should be(Set(1L))
   }
 
   it should "be able to insert timestamps" in {
     val timestampStorage = TimestampStorage()
     timestampStorage.insert(Seq(1L, 2L, 3L))
-    timestampStorage.all should be (Set(1L, 2L, 3L))
+    timestampStorage.all should be(Set(1L, 2L, 3L))
   }
 
   it should "be able to remove a timestamp" in {
     val timestampStorage = TimestampStorage()
     timestampStorage.insert(Seq(1L, 2L, 3L))
     timestampStorage.remove(1L)
-    timestampStorage.all should be (Set(2L, 3L))
+    timestampStorage.all should be(Set(2L, 3L))
   }
 
   it should "be able to remove timestamps" in {
     val timestampStorage = TimestampStorage()
     timestampStorage.insert(Seq(1L, 2L, 3L))
     timestampStorage.remove(Seq(1L, 2L))
-    timestampStorage.all should be (Set(3L))
+    timestampStorage.all should be(Set(3L))
   }
 
   it should "be able to get timestamps in range [a, b)" in {
     val timestampStorage = TimestampStorage()
     timestampStorage.insert(Seq(6L, 2L, 3L, 4L, 8L))
-    timestampStorage.fromUntil(2L, 6L) should be (Set(2L, 3L, 4L))
+    timestampStorage.fromUntil(2L, 6L) should be(Set(2L, 3L, 4L))
   }
 
   it should "be able to get timestamps in range (a, b]" in {
     val timestampStorage = TimestampStorage()
     timestampStorage.insert(Seq(6L, 2L, 3L, 4L, 8L))
-    timestampStorage.afterTil(2L, 6L) should be (Set(3L, 4L, 6L))
+    timestampStorage.afterTil(2L, 6L) should be(Set(3L, 4L, 6L))
   }
 
   it should "be able to get timestamps smaller than b" in {
     val timestampStorage = TimestampStorage()
     timestampStorage.insert(Seq(6L, 2L, 3L, 4L, 8L))
-    timestampStorage.until(8L) should be (Set(2L, 3L, 4L, 6L))
+    timestampStorage.until(8L) should be(Set(2L, 3L, 4L, 6L))
   }
 
   it should "be able to get timestamps bigger than or equal a" in {
     val timestampStorage = TimestampStorage()
     timestampStorage.insert(Seq(6L, 2L, 3L, 4L, 8L))
-    timestampStorage.from(4L) should be (Set(4L, 6L, 8L))
+    timestampStorage.from(4L) should be(Set(4L, 6L, 8L))
   }
 
 }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/datasource/connector/DataConnectorFactorySpec.scala b/measure/src/test/scala/org/apache/griffin/measure/datasource/connector/DataConnectorFactorySpec.scala
index 192a799..2537931 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/datasource/connector/DataConnectorFactorySpec.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/datasource/connector/DataConnectorFactorySpec.scala
@@ -21,32 +21,39 @@
 
 import org.apache.spark.rdd.RDD
 import org.apache.spark.sql.{DataFrame, SparkSession}
-import org.apache.spark.streaming.dstream.InputDStream
 import org.apache.spark.streaming.StreamingContext
+import org.apache.spark.streaming.dstream.InputDStream
 import org.scalatest.FlatSpec
 
 import org.apache.griffin.measure.configuration.dqdefinition.DataConnectorParam
 import org.apache.griffin.measure.context.TimeRange
 import org.apache.griffin.measure.datasource.TimestampStorage
 import org.apache.griffin.measure.datasource.cache.StreamingCacheClient
-import org.apache.griffin.measure.datasource.connector.DataConnectorFactory
-import org.apache.griffin.measure.datasource.connector.batch.{BatchDataConnector, MySqlDataConnector}
-import org.apache.griffin.measure.datasource.connector.streaming.{KafkaStreamingStringDataConnector, StreamingDataConnector}
+import org.apache.griffin.measure.datasource.connector.batch.{
+  BatchDataConnector,
+  MySqlDataConnector
+}
+import org.apache.griffin.measure.datasource.connector.streaming.{
+  KafkaStreamingStringDataConnector,
+  StreamingDataConnector
+}
 
-case class ExampleBatchDataConnector(@transient sparkSession: SparkSession,
-                                     dcParam: DataConnectorParam,
-                                     timestampStorage: TimestampStorage) extends BatchDataConnector {
+case class ExampleBatchDataConnector(
+    @transient sparkSession: SparkSession,
+    dcParam: DataConnectorParam,
+    timestampStorage: TimestampStorage)
+    extends BatchDataConnector {
 
   override def data(ms: Long): (Option[DataFrame], TimeRange) = (None, TimeRange(ms))
 }
 
-
-case class ExampleStreamingDataConnector(@transient sparkSession: SparkSession,
-                                         @transient ssc: StreamingContext,
-                                         dcParam: DataConnectorParam,
-                                         timestampStorage: TimestampStorage,
-                                         streamingCacheClientOpt: Option[StreamingCacheClient]
-                                        ) extends StreamingDataConnector {
+case class ExampleStreamingDataConnector(
+    @transient sparkSession: SparkSession,
+    @transient ssc: StreamingContext,
+    dcParam: DataConnectorParam,
+    timestampStorage: TimestampStorage,
+    streamingCacheClientOpt: Option[StreamingCacheClient])
+    extends StreamingDataConnector {
   override type K = Unit
   override type V = Unit
   override type OUT = Unit
@@ -60,7 +67,6 @@
 
 class NotDataConnector
 
-
 class DataConnectorWithoutApply extends BatchDataConnector {
   override val sparkSession: SparkSession = null
   override val dcParam: DataConnectorParam = null
@@ -69,16 +75,17 @@
   override def data(ms: Long): (Option[DataFrame], TimeRange) = null
 }
 
-
 class DataConnectorFactorySpec extends FlatSpec {
 
   "DataConnectorFactory" should "be able to create custom batch connector" in {
     val param = DataConnectorParam(
-      "CUSTOM", null, null,
-      Map("class" -> classOf[ExampleBatchDataConnector].getCanonicalName), Nil)
+      "CUSTOM",
+      null,
+      null,
+      Map("class" -> classOf[ExampleBatchDataConnector].getCanonicalName),
+      Nil)
     // apparently Scalamock can not mock classes without empty-paren constructor, providing nulls
-    val res = DataConnectorFactory.getDataConnector(
-      null, null, param, null, None)
+    val res = DataConnectorFactory.getDataConnector(null, null, param, null, None)
     assert(res.get != null)
     assert(res.isSuccess)
     assert(res.get.isInstanceOf[ExampleBatchDataConnector])
@@ -87,53 +94,63 @@
 
   it should "be able to create MySqlDataConnector" in {
     val param = DataConnectorParam(
-      "CUSTOM", null, null,
-      Map("class" -> classOf[MySqlDataConnector].getCanonicalName), Nil)
+      "CUSTOM",
+      null,
+      null,
+      Map("class" -> classOf[MySqlDataConnector].getCanonicalName),
+      Nil)
     // apparently Scalamock can not mock classes without empty-paren constructor, providing nulls
-    val res = DataConnectorFactory.getDataConnector(
-      null, null, param, null, None)
+    val res = DataConnectorFactory.getDataConnector(null, null, param, null, None)
     assert(res.isSuccess)
     assert(res.get.isInstanceOf[MySqlDataConnector])
   }
 
   it should "be able to create KafkaStreamingStringDataConnector" in {
     val param = DataConnectorParam(
-      "CUSTOM", null, null,
-      Map("class" -> classOf[KafkaStreamingStringDataConnector].getCanonicalName), Nil)
-    val res = DataConnectorFactory.getDataConnector(
-      null, null, param, null, None)
+      "CUSTOM",
+      null,
+      null,
+      Map("class" -> classOf[KafkaStreamingStringDataConnector].getCanonicalName),
+      Nil)
+    val res = DataConnectorFactory.getDataConnector(null, null, param, null, None)
     assert(res.isSuccess)
     assert(res.get.isInstanceOf[KafkaStreamingStringDataConnector])
   }
 
   it should "fail if class is not extending DataConnectors" in {
     val param = DataConnectorParam(
-      "CUSTOM", null, null,
-      Map("class" -> classOf[NotDataConnector].getCanonicalName), Nil)
+      "CUSTOM",
+      null,
+      null,
+      Map("class" -> classOf[NotDataConnector].getCanonicalName),
+      Nil)
     // apparently Scalamock can not mock classes without empty-paren constructor, providing nulls
-    val res = DataConnectorFactory.getDataConnector(
-      null, null, param, null, None)
+    val res = DataConnectorFactory.getDataConnector(null, null, param, null, None)
     assert(res.isFailure)
     assert(res.failed.get.isInstanceOf[ClassCastException])
-    assert(res.failed.get.getMessage ==
-      "org.apache.griffin.measure.datasource.connector.NotDataConnector" +
-        " should extend BatchDataConnector or StreamingDataConnector")
+    assert(
+      res.failed.get.getMessage ==
+        "org.apache.griffin.measure.datasource.connector.NotDataConnector" +
+          " should extend BatchDataConnector or StreamingDataConnector")
   }
 
   it should "fail if class does not have apply() method" in {
     val param = DataConnectorParam(
-      "CUSTOM", null, null,
-      Map("class" -> classOf[DataConnectorWithoutApply].getCanonicalName), Nil)
+      "CUSTOM",
+      null,
+      null,
+      Map("class" -> classOf[DataConnectorWithoutApply].getCanonicalName),
+      Nil)
     // apparently Scalamock can not mock classes without empty-paren constructor, providing nulls
-    val res = DataConnectorFactory.getDataConnector(
-      null, null, param, null, None)
+    val res = DataConnectorFactory.getDataConnector(null, null, param, null, None)
     assert(res.isFailure)
     assert(res.failed.get.isInstanceOf[NoSuchMethodException])
-    assert(res.failed.get.getMessage ==
-      "org.apache.griffin.measure.datasource.connector.DataConnectorWithoutApply.apply" +
-        "(org.apache.spark.sql.SparkSession, " +
-        "org.apache.griffin.measure.configuration.dqdefinition.DataConnectorParam, " +
-        "org.apache.griffin.measure.datasource.TimestampStorage)")
+    assert(
+      res.failed.get.getMessage ==
+        "org.apache.griffin.measure.datasource.connector.DataConnectorWithoutApply.apply" +
+          "(org.apache.spark.sql.SparkSession, " +
+          "org.apache.griffin.measure.configuration.dqdefinition.DataConnectorParam, " +
+          "org.apache.griffin.measure.datasource.TimestampStorage)")
   }
 
 }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/datasource/connector/batch/FileBasedDataConnectorTest.scala b/measure/src/test/scala/org/apache/griffin/measure/datasource/connector/batch/FileBasedDataConnectorTest.scala
index 66a30c1..4cf2028 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/datasource/connector/batch/FileBasedDataConnectorTest.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/datasource/connector/batch/FileBasedDataConnectorTest.scala
@@ -54,7 +54,8 @@
     df.unpersist()
   }
 
-  private final val dcParam = DataConnectorParam("file", "1", "test_df", Map.empty[String, String], Nil)
+  private final val dcParam =
+    DataConnectorParam("file", "1", "test_df", Map.empty[String, String], Nil)
   private final val timestampStorage = TimestampStorage()
 
   // Regarding Local FileSystem
@@ -62,13 +63,8 @@
   "file based data connector" should "be able to read from local filesystem" in {
     val configs = Map(
       "format" -> "csv",
-      "paths" -> Seq(
-        s"file://${getClass.getResource("/hive/person_table.csv").getPath}"
-      ),
-      "options" -> Map(
-        "header" -> "false"
-      )
-    )
+      "paths" -> Seq(s"file://${getClass.getResource("/hive/person_table.csv").getPath}"),
+      "options" -> Map("header" -> "false"))
 
     val dc = FileBasedDataConnector(spark, dcParam.copy(config = configs), timestampStorage)
     val result = dc.data(1000L)
@@ -82,27 +78,30 @@
   it should "respect the provided schema, if any" in {
     val configs = Map(
       "format" -> "csv",
-      "paths" -> Seq(
-        s"file://${getClass.getResource("/hive/person_table.csv").getPath}"
-      )
-    )
+      "paths" -> Seq(s"file://${getClass.getResource("/hive/person_table.csv").getPath}"))
 
     // no schema
     assertThrows[IllegalArgumentException](
-      FileBasedDataConnector(spark, dcParam.copy(config = configs), timestampStorage)
-    )
+      FileBasedDataConnector(spark, dcParam.copy(config = configs), timestampStorage))
 
     // invalid schema
     assertThrows[IllegalStateException](
-      FileBasedDataConnector(spark, dcParam.copy(config = configs + (("schema", ""))), timestampStorage)
-    )
+      FileBasedDataConnector(
+        spark,
+        dcParam.copy(config = configs + (("schema", ""))),
+        timestampStorage))
 
     // valid schema
-    val result1 = FileBasedDataConnector(spark,
-      dcParam.copy(config = configs + (("schema",
-        Seq(Map("name" -> "name", "type" -> "string"), Map("name" -> "age", "type" -> "int", "nullable" -> "true"))
-      )))
-      , timestampStorage)
+    val result1 = FileBasedDataConnector(
+      spark,
+      dcParam.copy(
+        config = configs + (
+          (
+            "schema",
+            Seq(
+              Map("name" -> "name", "type" -> "string"),
+              Map("name" -> "age", "type" -> "int", "nullable" -> "true"))))),
+      timestampStorage)
       .data(1L)
 
     val expSchema = new StructType()
@@ -115,17 +114,18 @@
     assert(result1._1.get.schema == expSchema)
 
     // valid headers
-    val result2 = FileBasedDataConnector(spark,
-      dcParam.copy(config = configs + (("options", Map(
-        "header" -> "true"
-      )
-      )))
-      , timestampStorage)
+    val result2 = FileBasedDataConnector(
+      spark,
+      dcParam.copy(config = configs + (("options", Map("header" -> "true")))),
+      timestampStorage)
       .data(1L)
 
     assert(result2._1.isDefined)
     assert(result2._1.get.collect().length == 1)
-    result2._1.get.columns should contain theSameElementsAs Seq("Joey", "14", ConstantColumns.tmst)
+    result2._1.get.columns should contain theSameElementsAs Seq(
+      "Joey",
+      "14",
+      ConstantColumns.tmst)
   }
 
   // skip on erroneous paths
@@ -135,29 +135,28 @@
       "format" -> "csv",
       "paths" -> Seq(
         s"file://${getClass.getResource("/hive/person_table.csv").getPath}",
-        s"${java.util.UUID.randomUUID().toString}/"
-      ),
+        s"${java.util.UUID.randomUUID().toString}/"),
       "skipErrorPaths" -> true,
-      "options" -> Map(
-        "header" -> "true"
-      )
-    )
+      "options" -> Map("header" -> "true"))
 
     // valid paths
-    val result1 = FileBasedDataConnector(spark, dcParam.copy(config = configs), timestampStorage).data(1L)
+    val result1 =
+      FileBasedDataConnector(spark, dcParam.copy(config = configs), timestampStorage).data(1L)
 
     assert(result1._1.isDefined)
     assert(result1._1.get.collect().length == 1)
 
     // non existent path
     assertThrows[IllegalArgumentException](
-      FileBasedDataConnector(spark, dcParam.copy(config = configs - "skipErrorPaths"), timestampStorage).data(1L)
-    )
+      FileBasedDataConnector(
+        spark,
+        dcParam.copy(config = configs - "skipErrorPaths"),
+        timestampStorage).data(1L))
 
     // no path
     assertThrows[AssertionError](
-      FileBasedDataConnector(spark, dcParam.copy(config = configs - "paths"), timestampStorage).data(1L)
-    )
+      FileBasedDataConnector(spark, dcParam.copy(config = configs - "paths"), timestampStorage)
+        .data(1L))
   }
 
   // Regarding various formats
@@ -167,16 +166,11 @@
     formats.map(f => {
       val configs = Map(
         "format" -> f,
-        "paths" -> Seq(
-          s"file://${getClass.getResource(s"/files/person_table.$f").getPath}"
-        ),
-        "options" -> Map(
-          "header" -> "true",
-          "inferSchema" -> "true"
-        )
-      )
+        "paths" -> Seq(s"file://${getClass.getResource(s"/files/person_table.$f").getPath}"),
+        "options" -> Map("header" -> "true", "inferSchema" -> "true"))
 
-      val result = FileBasedDataConnector(spark, dcParam.copy(config = configs), timestampStorage).data(1L)
+      val result =
+        FileBasedDataConnector(spark, dcParam.copy(config = configs), timestampStorage).data(1L)
 
       assert(result._1.isDefined)
 
@@ -196,16 +190,12 @@
     formats.map(f => {
       val configs = Map(
         "format" -> f,
-        "paths" -> Seq(
-          s"file://${getClass.getResource(s"/files/person_table.$f").getPath}"
-        ),
-        "options" -> Map(
-          "header" -> "true"
-        ),
-        "schema" -> Seq(Map("name" -> "name", "type" -> "string"))
-      )
+        "paths" -> Seq(s"file://${getClass.getResource(s"/files/person_table.$f").getPath}"),
+        "options" -> Map("header" -> "true"),
+        "schema" -> Seq(Map("name" -> "name", "type" -> "string")))
 
-      val result = FileBasedDataConnector(spark, dcParam.copy(config = configs), timestampStorage).data(1L)
+      val result =
+        FileBasedDataConnector(spark, dcParam.copy(config = configs), timestampStorage).data(1L)
 
       assert(result._1.isDefined)
 
diff --git a/measure/src/test/scala/org/apache/griffin/measure/job/BatchDQAppTest.scala b/measure/src/test/scala/org/apache/griffin/measure/job/BatchDQAppTest.scala
index fd473c8..633974c 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/job/BatchDQAppTest.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/job/BatchDQAppTest.scala
@@ -43,7 +43,7 @@
       spark.conf.set("spark.app.name", "BatchDQApp Test")
       spark.conf.set("spark.sql.crossJoin.enabled", "true")
 
-      val logLevel = getGriffinLogLevel()
+      val logLevel = getGriffinLogLevel
       sc.setLogLevel(sparkParam.getLogLevel)
       griffinLogger.setLevel(logLevel)
 
@@ -71,7 +71,8 @@
 
   "accuracy batch job" should "work" in {
     dqApp = initApp("/_accuracy-batch-griffindsl.json")
-    val expectedMetrics = Map("total_count" -> 50,
+    val expectedMetrics = Map(
+      "total_count" -> 50,
       "miss_count" -> 4,
       "matched_count" -> 46,
       "matchedFraction" -> 0.92)
@@ -81,9 +82,7 @@
 
   "completeness batch job" should "work" in {
     dqApp = initApp("/_completeness-batch-griffindsl.json")
-    val expectedMetrics = Map("total" -> 50,
-      "incomplete" -> 1,
-      "complete" -> 49)
+    val expectedMetrics = Map("total" -> 50, "incomplete" -> 1, "complete" -> 49)
 
     runAndCheckResult(expectedMetrics)
   }
@@ -91,9 +90,8 @@
   "distinctness batch job" should "work" in {
     dqApp = initApp("/_distinctness-batch-griffindsl.json")
 
-    val expectedMetrics = Map("total" -> 50,
-      "distinct" -> 49,
-      "dup" -> Seq(Map("dup" -> 1, "num" -> 1)))
+    val expectedMetrics =
+      Map("total" -> 50, "distinct" -> 49, "dup" -> Seq(Map("dup" -> 1, "num" -> 1)))
 
     runAndCheckResult(expectedMetrics)
   }
@@ -101,7 +99,8 @@
   "profiling batch job" should "work" in {
     dqApp = initApp("/_profiling-batch-griffindsl.json")
     val expectedMetrics = Map(
-      "prof" -> Seq(Map("user_id" -> 10004, "cnt" -> 1),
+      "prof" -> Seq(
+        Map("user_id" -> 10004, "cnt" -> 1),
         Map("user_id" -> 10011, "cnt" -> 1),
         Map("user_id" -> 10010, "cnt" -> 1),
         Map("user_id" -> 10002, "cnt" -> 1),
@@ -113,25 +112,23 @@
         Map("user_id" -> 10003, "cnt" -> 1),
         Map("user_id" -> 10007, "cnt" -> 1),
         Map("user_id" -> 10012, "cnt" -> 1),
-        Map("user_id" -> 10009, "cnt" -> 1)
-      ),
-      "post_group" -> Seq(Map("post_code" -> "94022", "cnt" -> 13))
-    )
+        Map("user_id" -> 10009, "cnt" -> 1)),
+      "post_group" -> Seq(Map("post_code" -> "94022", "cnt" -> 13)))
 
     runAndCheckResult(expectedMetrics)
   }
 
   "timeliness batch job" should "work" in {
     dqApp = initApp("/_timeliness-batch-griffindsl.json")
-    val expectedMetrics = Map("total" -> 10,
+    val expectedMetrics = Map(
+      "total" -> 10,
       "avg" -> 276000,
       "percentile_95" -> 660000,
-      "step" -> Seq(Map("step" -> 0, "cnt" -> 6),
+      "step" -> Seq(
+        Map("step" -> 0, "cnt" -> 6),
         Map("step" -> 5, "cnt" -> 2),
         Map("step" -> 3, "cnt" -> 1),
-        Map("step" -> 4, "cnt" -> 1)
-      )
-    )
+        Map("step" -> 4, "cnt" -> 1)))
 
     runAndCheckResult(expectedMetrics)
   }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/job/DQAppTest.scala b/measure/src/test/scala/org/apache/griffin/measure/job/DQAppTest.scala
index 952c591..a05ade6 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/job/DQAppTest.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/job/DQAppTest.scala
@@ -17,14 +17,12 @@
 
 package org.apache.griffin.measure.job
 
-import scala.util.Failure
-import scala.util.Success
-import org.scalatest.BeforeAndAfterAll
-import org.scalatest.FlatSpec
-import org.scalatest.Matchers
+import scala.util.{Failure, Success}
+
+import org.scalatest.{BeforeAndAfterAll, FlatSpec, Matchers}
+
+import org.apache.griffin.measure.{Loggable, SparkSuiteBase}
 import org.apache.griffin.measure.Application._
-import org.apache.griffin.measure.Loggable
-import org.apache.griffin.measure.SparkSuiteBase
 import org.apache.griffin.measure.configuration.dqdefinition._
 import org.apache.griffin.measure.configuration.enums.ProcessType
 import org.apache.griffin.measure.configuration.enums.ProcessType._
@@ -32,7 +30,12 @@
 import org.apache.griffin.measure.launch.batch.BatchDQApp
 import org.apache.griffin.measure.launch.streaming.StreamingDQApp
 
-class DQAppTest extends FlatSpec with SparkSuiteBase with BeforeAndAfterAll with Matchers with Loggable {
+class DQAppTest
+    extends FlatSpec
+    with SparkSuiteBase
+    with BeforeAndAfterAll
+    with Matchers
+    with Loggable {
 
   var envParam: EnvConfig = _
   var sparkParam: SparkParam = _
@@ -59,7 +62,7 @@
       case BatchProcessType => BatchDQApp(allParam)
       case StreamingProcessType => StreamingDQApp(allParam)
       case _ =>
-        error(s"${procType} is unsupported process type!")
+        error(s"$procType is unsupported process type!")
         sys.exit(-4)
     }
 
diff --git a/measure/src/test/scala/org/apache/griffin/measure/sink/CustomSink.scala b/measure/src/test/scala/org/apache/griffin/measure/sink/CustomSink.scala
index 5ac197a..f95f349 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/sink/CustomSink.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/sink/CustomSink.scala
@@ -18,21 +18,24 @@
 package org.apache.griffin.measure.sink
 
 import scala.collection.mutable
+import scala.collection.mutable.ListBuffer
 
 import org.apache.spark.rdd.RDD
 
 /**
-  * sink records and metrics in memory for test.
-  *
-  * @param config sink configurations
-  * @param metricName
-  * @param timeStamp
-  * @param block
-  */
-case class CustomSink(config: Map[String, Any],
-                      metricName: String,
-                      timeStamp: Long,
-                      block: Boolean) extends Sink {
+ * sink records and metrics in memory for test.
+ *
+ * @param config sink configurations
+ * @param metricName
+ * @param timeStamp
+ * @param block
+ */
+case class CustomSink(
+    config: Map[String, Any],
+    metricName: String,
+    timeStamp: Long,
+    block: Boolean)
+    extends Sink {
   def available(): Boolean = true
 
   def start(msg: String): Unit = {}
@@ -41,7 +44,7 @@
 
   def log(rt: Long, msg: String): Unit = {}
 
-  val allRecords = mutable.ListBuffer[String]()
+  val allRecords: ListBuffer[String] = mutable.ListBuffer[String]()
 
   def sinkRecords(records: RDD[String], name: String): Unit = {
     allRecords ++= records.collect()
@@ -51,7 +54,7 @@
     allRecords ++= records
   }
 
-  val allMetrics = mutable.Map[String, Any]()
+  val allMetrics: mutable.Map[String, Any] = mutable.Map[String, Any]()
 
   def sinkMetrics(metrics: Map[String, Any]): Unit = {
     allMetrics ++= metrics
diff --git a/measure/src/test/scala/org/apache/griffin/measure/sink/CustomSinkTest.scala b/measure/src/test/scala/org/apache/griffin/measure/sink/CustomSinkTest.scala
index 33886f8..8675be9 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/sink/CustomSinkTest.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/sink/CustomSinkTest.scala
@@ -25,19 +25,19 @@
 
 class CustomSinkTest extends SinkTestBase {
 
-  var sinkParam = SinkParam("custom",
-    Map("class" -> "org.apache.griffin.measure.sink.CustomSink"))
-  var sinkParams = Seq(sinkParam)
+  val sinkParam: SinkParam =
+    SinkParam("custom", Map("class" -> "org.apache.griffin.measure.sink.CustomSink"))
+  override var sinkParams = Seq(sinkParam)
 
-  def withCustomSink[A](func: (MultiSinks) => A): A = {
+  def withCustomSink[A](func: MultiSinks => A): A = {
     val sinkFactory = SinkFactory(sinkParams, "Test Sink Factory")
     val timestamp = System.currentTimeMillis
-    val sinks = sinkFactory.getSinks(timestamp, true)
+    val sinks = sinkFactory.getSinks(timestamp, block = true)
     func(sinks)
   }
 
   "custom sink" can "sink metrics" in {
-    val actualMetrics = withCustomSink((sinks) => {
+    val actualMetrics = withCustomSink(sinks => {
       sinks.sinkMetrics(Map("sum" -> 10))
       sinks.sinkMetrics(Map("count" -> 5))
       sinks.headSinkOpt match {
@@ -51,7 +51,7 @@
   }
 
   "custom sink" can "sink records" in {
-    val actualRecords = withCustomSink((sinks) => {
+    val actualRecords = withCustomSink(sinks => {
       val rdd1 = createDataFrame(1 to 2)
       sinks.sinkRecords(rdd1.toJSON.rdd, "test records")
       val rdd2 = createDataFrame(2 to 4)
@@ -72,6 +72,9 @@
     actualRecords should be(expected)
   }
 
+  val metricsDefaultOutput: RuleOutputParam =
+    RuleOutputParam("metrics", "default_output", "default")
+
   "RecordWriteStep" should "work with custom sink" in {
     val resultTable = "result_table"
     val df = createDataFrame(1 to 5)
@@ -81,7 +84,7 @@
     val dQContext = getDqContext()
     RecordWriteStep(rwName, resultTable).execute(dQContext)
 
-    val actualRecords = dQContext.getSink().asInstanceOf[MultiSinks].headSinkOpt match {
+    val actualRecords = dQContext.getSink.asInstanceOf[MultiSinks].headSinkOpt match {
       case Some(sink: CustomSink) => sink.allRecords
       case _ => mutable.ListBuffer[String]()
     }
@@ -96,10 +99,10 @@
     actualRecords should be(expected)
   }
 
-  val metricsDefaultOutput = RuleOutputParam("metrics", "default_output", "default")
-  val metricsEntriesOutput = RuleOutputParam("metrics", "entries_output", "entries")
-  val metricsArrayOutput = RuleOutputParam("metrics", "array_output", "array")
-  val metricsMapOutput = RuleOutputParam("metrics", "map_output", "map")
+  val metricsEntriesOutput: RuleOutputParam =
+    RuleOutputParam("metrics", "entries_output", "entries")
+  val metricsArrayOutput: RuleOutputParam = RuleOutputParam("metrics", "array_output", "array")
+  val metricsMapOutput: RuleOutputParam = RuleOutputParam("metrics", "map_output", "map")
 
   "MetricWriteStep" should "output default metrics with custom sink" in {
     val resultTable = "result_table"
@@ -119,17 +122,18 @@
 
     metricWriteStep.execute(dQContext)
     MetricFlushStep().execute(dQContext)
-    val actualMetrics = dQContext.getSink().asInstanceOf[MultiSinks].headSinkOpt match {
+    val actualMetrics = dQContext.getSink.asInstanceOf[MultiSinks].headSinkOpt match {
       case Some(sink: CustomSink) => sink.allMetrics
       case _ => mutable.Map[String, Any]()
     }
 
-    val metricsValue = Seq(Map("sex" -> "man", "max(age)" -> 19, "avg(age)" -> 18.0),
+    val metricsValue = Seq(
+      Map("sex" -> "man", "max(age)" -> 19, "avg(age)" -> 18.0),
       Map("sex" -> "women", "max(age)" -> 20, "avg(age)" -> 18.0))
 
     val expected = Map("default_output" -> metricsValue)
 
-    actualMetrics.get("value").get should be(expected)
+    actualMetrics("value") should be(expected)
   }
 
 }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/sink/SinkTestBase.scala b/measure/src/test/scala/org/apache/griffin/measure/sink/SinkTestBase.scala
index 5380b4c..919183b 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/sink/SinkTestBase.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/sink/SinkTestBase.scala
@@ -17,40 +17,30 @@
 
 package org.apache.griffin.measure.sink
 
-import org.apache.spark.sql.DataFrame
-import org.apache.spark.sql.Row
+import org.apache.spark.sql.{DataFrame, Row}
 import org.apache.spark.sql.types._
-import org.scalatest.FlatSpec
-import org.scalatest.Matchers
+import org.scalatest.{FlatSpec, Matchers}
 
-import org.apache.griffin.measure.Loggable
+import org.apache.griffin.measure.{Loggable, SparkSuiteBase}
 import org.apache.griffin.measure.configuration.dqdefinition.SinkParam
 import org.apache.griffin.measure.configuration.enums.ProcessType.BatchProcessType
 import org.apache.griffin.measure.context.{ContextId, DQContext}
-import org.apache.griffin.measure.SparkSuiteBase
 
 trait SinkTestBase extends FlatSpec with Matchers with SparkSuiteBase with Loggable {
 
   var sinkParams: Seq[SinkParam]
 
   def getDqContext(name: String = "test-context"): DQContext = {
-    DQContext(
-      ContextId(System.currentTimeMillis),
-      name,
-      Nil,
-      sinkParams,
-      BatchProcessType
-    )(spark)
+    DQContext(ContextId(System.currentTimeMillis), name, Nil, sinkParams, BatchProcessType)(spark)
   }
 
-
   def createDataFrame(arr: Seq[Int]): DataFrame = {
-    val schema = StructType(Array(
-      StructField("id", LongType),
-      StructField("name", StringType),
-      StructField("sex", StringType),
-      StructField("age", IntegerType)
-    ))
+    val schema = StructType(
+      Array(
+        StructField("id", LongType),
+        StructField("name", StringType),
+        StructField("sex", StringType),
+        StructField("age", IntegerType)))
     val rows = arr.map { i =>
       Row(i.toLong, s"name_$i", if (i % 2 == 0) "man" else "women", i + 15)
     }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/step/TransformStepTest.scala b/measure/src/test/scala/org/apache/griffin/measure/step/TransformStepTest.scala
index c3a9330..5a227a0 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/step/TransformStepTest.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/step/TransformStepTest.scala
@@ -19,21 +19,20 @@
 
 import org.scalatest._
 
-import org.apache.griffin.measure.Loggable
+import org.apache.griffin.measure.{Loggable, SparkSuiteBase}
 import org.apache.griffin.measure.configuration.enums.ProcessType.BatchProcessType
-import org.apache.griffin.measure.context.ContextId
-import org.apache.griffin.measure.context.DQContext
-import org.apache.griffin.measure.SparkSuiteBase
+import org.apache.griffin.measure.context.{ContextId, DQContext}
 import org.apache.griffin.measure.step.transform.TransformStep
 
 class TransformStepTest extends FlatSpec with Matchers with SparkSuiteBase with Loggable {
 
-  case class DualTransformStep(name: String,
-                               duration: Int,
-                               rule: String = "",
-                               details: Map[String, Any] = Map(),
-                               cache: Boolean = false
-                              ) extends TransformStep {
+  case class DualTransformStep(
+      name: String,
+      duration: Int,
+      rule: String = "",
+      details: Map[String, Any] = Map(),
+      cache: Boolean = false)
+      extends TransformStep {
 
     def doExecute(context: DQContext): Boolean = {
       val threadName = Thread.currentThread().getName
@@ -45,32 +44,26 @@
   }
 
   private def getDqContext(name: String = "test-context"): DQContext = {
-    DQContext(
-      ContextId(System.currentTimeMillis),
-      name,
-      Nil,
-      Nil,
-      BatchProcessType
-    )(spark)
+    DQContext(ContextId(System.currentTimeMillis), name, Nil, Nil, BatchProcessType)(spark)
   }
 
   /**
-    * Run transform steps in parallel. Here are the dependencies of transform steps
-    *
-    * step5
-    * |   |---step2
-    * |   |   |---step1
-    * |   |---step3
-    * |   |   |---step1
-    * |   |---step4
-    *
-    * step1 : -->
-    * step2 :    --->
-    * step3 :    ---->
-    * step4 : ->
-    * step5 :         -->
-    *
-    */
+   * Run transform steps in parallel. Here are the dependencies of transform steps
+   *
+   * step5
+   * |   |---step2
+   * |   |   |---step1
+   * |   |---step3
+   * |   |   |---step1
+   * |   |---step4
+   *
+   * step1 : -->
+   * step2 :    --->
+   * step3 :    ---->
+   * step4 : ->
+   * step5 :         -->
+   *
+   */
   "transform step " should "be run steps in parallel" in {
     val step1 = DualTransformStep("step1", 3)
     val step2 = DualTransformStep("step2", 4)
@@ -84,6 +77,6 @@
     step5.parentSteps += step4
 
     val context = getDqContext()
-    step5.execute(context) should be (true)
+    step5.execute(context) should be(true)
   }
 }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/step/builder/dsl/transform/CompletenessExpr2DQStepsTest.scala b/measure/src/test/scala/org/apache/griffin/measure/step/builder/dsl/transform/CompletenessExpr2DQStepsTest.scala
index c17360b..67e5236 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/step/builder/dsl/transform/CompletenessExpr2DQStepsTest.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/step/builder/dsl/transform/CompletenessExpr2DQStepsTest.scala
@@ -30,20 +30,20 @@
   "CompletenessExpr2DQSteps" should "get correct where clause" in {
     val completeness = CompletenessExpr2DQSteps(mock[DQContext], mock[Expr], mock[RuleParam])
 
-    val regexClause = completeness.getEachErrorWhereClause(
-      RuleErrorConfParam("id", "regex", List(raw"\d+")))
-    regexClause shouldBe (raw"(`id` REGEXP '\\d+')")
+    val regexClause =
+      completeness.getEachErrorWhereClause(RuleErrorConfParam("id", "regex", List(raw"\d+")))
+    regexClause shouldBe raw"(`id` REGEXP '\\d+')"
 
     val enumerationClause = completeness.getEachErrorWhereClause(
       RuleErrorConfParam("id", "enumeration", List("1", "2", "3")))
-    enumerationClause shouldBe ("(`id` IN ('1', '2', '3'))")
+    enumerationClause shouldBe "(`id` IN ('1', '2', '3'))"
 
     val noneClause = completeness.getEachErrorWhereClause(
       RuleErrorConfParam("id", "enumeration", List("hive_none")))
-    noneClause shouldBe ("(`id` IS NULL)")
+    noneClause shouldBe "(`id` IS NULL)"
 
     val fullClause = completeness.getEachErrorWhereClause(
       RuleErrorConfParam("id", "enumeration", List("1", "hive_none", "3", "foo,bar")))
-    fullClause shouldBe ("(`id` IN ('1', '3', 'foo,bar') OR `id` IS NULL)")
+    fullClause shouldBe "(`id` IN ('1', '3', 'foo,bar') OR `id` IS NULL)"
   }
 }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/transformations/AccuracyTransformationsIntegrationTest.scala b/measure/src/test/scala/org/apache/griffin/measure/transformations/AccuracyTransformationsIntegrationTest.scala
index 2bf5119..ddfc95b 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/transformations/AccuracyTransformationsIntegrationTest.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/transformations/AccuracyTransformationsIntegrationTest.scala
@@ -19,12 +19,13 @@
 
 import org.apache.spark.sql.DataFrame
 import org.scalatest._
+
+import org.apache.griffin.measure.SparkSuiteBase
 import org.apache.griffin.measure.configuration.dqdefinition._
 import org.apache.griffin.measure.configuration.enums.ProcessType.BatchProcessType
 import org.apache.griffin.measure.context.{ContextId, DQContext}
 import org.apache.griffin.measure.datasource.DataSourceFactory
 import org.apache.griffin.measure.job.builder.DQJobBuilder
-import org.apache.griffin.measure.SparkSuiteBase
 
 case class AccuracyResult(total: Long, miss: Long, matched: Long, matchedFraction: Double)
 
@@ -58,8 +59,7 @@
     checkAccuracy(
       sourceName = PERSON_TABLE,
       targetName = EMPTY_PERSON_TABLE,
-      expectedResult = AccuracyResult(total = 2, miss = 2, matched = 0, matchedFraction = 0.0)
-    )
+      expectedResult = AccuracyResult(total = 2, miss = 2, matched = 0, matchedFraction = 0.0))
   }
 
   "accuracy" should "work with empty source" in {
@@ -76,25 +76,24 @@
       expectedResult = AccuracyResult(total = 0, miss = 0, matched = 0, matchedFraction = 1.0))
   }
 
-  private def checkAccuracy(sourceName: String, targetName: String, expectedResult: AccuracyResult) = {
+  private def checkAccuracy(
+      sourceName: String,
+      targetName: String,
+      expectedResult: AccuracyResult) = {
     val dqContext: DQContext = getDqContext(
       dataSourcesParam = List(
         DataSourceParam(
           name = "source",
-          connectors = List(dataConnectorParam(tableName = sourceName))
-        ),
+          connectors = List(dataConnectorParam(tableName = sourceName))),
         DataSourceParam(
           name = "target",
-          connectors = List(dataConnectorParam(tableName = targetName))
-        )
-      ))
+          connectors = List(dataConnectorParam(tableName = targetName)))))
 
     val accuracyRule = RuleParam(
       dslType = "griffin-dsl",
       dqType = "ACCURACY",
       outDfName = "person_accuracy",
-      rule = "source.name = target.name"
-    )
+      rule = "source.name = target.name")
 
     val spark = this.spark
     import spark.implicits._
@@ -108,10 +107,8 @@
   }
 
   private def getRuleResults(dqContext: DQContext, rule: RuleParam): DataFrame = {
-    val dqJob = DQJobBuilder.buildDQJob(
-      dqContext,
-      evaluateRuleParam = EvaluateRuleParam(List(rule))
-    )
+    val dqJob =
+      DQJobBuilder.buildDQJob(dqContext, evaluateRuleParam = EvaluateRuleParam(List(rule)))
 
     dqJob.execute(dqContext)
 
@@ -122,48 +119,43 @@
     val personCsvPath = getClass.getResource("/hive/person_table.csv").getFile
 
     spark.sql(
-      s"CREATE TABLE ${PERSON_TABLE} " +
+      s"CREATE TABLE $PERSON_TABLE " +
         "( " +
         "  name String," +
         "  age int " +
         ") " +
         "ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' " +
-        "STORED AS TEXTFILE"
-    )
+        "STORED AS TEXTFILE")
 
-    spark.sql(s"LOAD DATA LOCAL INPATH '$personCsvPath' OVERWRITE INTO TABLE ${PERSON_TABLE}")
+    spark.sql(s"LOAD DATA LOCAL INPATH '$personCsvPath' OVERWRITE INTO TABLE $PERSON_TABLE")
   }
 
   private def createEmptyPersonTable(): Unit = {
     spark.sql(
-      s"CREATE TABLE ${EMPTY_PERSON_TABLE} " +
+      s"CREATE TABLE $EMPTY_PERSON_TABLE " +
         "( " +
         "  name String," +
         "  age int " +
         ") " +
         "ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' " +
-        "STORED AS TEXTFILE"
-    )
+        "STORED AS TEXTFILE")
 
-    spark.sql(s"select * from ${EMPTY_PERSON_TABLE}").show()
+    spark.sql(s"select * from $EMPTY_PERSON_TABLE").show()
   }
 
   private def dropTables(): Unit = {
-    spark.sql(s"DROP TABLE IF EXISTS ${PERSON_TABLE} ")
-    spark.sql(s"DROP TABLE IF EXISTS ${EMPTY_PERSON_TABLE} ")
+    spark.sql(s"DROP TABLE IF EXISTS $PERSON_TABLE ")
+    spark.sql(s"DROP TABLE IF EXISTS $EMPTY_PERSON_TABLE ")
   }
 
-  private def getDqContext(dataSourcesParam: Seq[DataSourceParam], name: String = "test-context"): DQContext = {
+  private def getDqContext(
+      dataSourcesParam: Seq[DataSourceParam],
+      name: String = "test-context"): DQContext = {
     val dataSources = DataSourceFactory.getDataSources(spark, null, dataSourcesParam)
     dataSources.foreach(_.init())
 
-    DQContext(
-      ContextId(System.currentTimeMillis),
-      name,
-      dataSources,
-      Nil,
-      BatchProcessType
-    )(spark)
+    DQContext(ContextId(System.currentTimeMillis), name, dataSources, Nil, BatchProcessType)(
+      spark)
   }
 
   private def dataConnectorParam(tableName: String) = {
@@ -172,7 +164,6 @@
       version = null,
       dataFrameName = null,
       config = Map("table.name" -> tableName),
-      preProc = null
-    )
+      preProc = null)
   }
 }
diff --git a/measure/src/test/scala/org/apache/griffin/measure/utils/ParamUtilTest.scala b/measure/src/test/scala/org/apache/griffin/measure/utils/ParamUtilTest.scala
index 5fb9610..720f9b2 100644
--- a/measure/src/test/scala/org/apache/griffin/measure/utils/ParamUtilTest.scala
+++ b/measure/src/test/scala/org/apache/griffin/measure/utils/ParamUtilTest.scala
@@ -23,11 +23,12 @@
 
 class ParamUtilTest extends FlatSpec with Matchers with BeforeAndAfter {
 
-  val fruits = Map[String, Any]("A" -> "apple", "B" -> "banana", "O" -> "orange")
-  val numbers = Map[String, Any]("a" -> 1, "b" -> 5, "c" -> 3)
-  val ids = Seq[Any](2, 3, 5, 7)
-  val cities = Seq[Any]("LA", "NY", "SLC")
-  val percentiles = Seq[Any](.95, "0.4", ".3", 1, "static", "0.2")
+  val fruits: Map[String, Any] =
+    Map[String, Any]("A" -> "apple", "B" -> "banana", "O" -> "orange")
+  val numbers: Map[String, Any] = Map[String, Any]("a" -> 1, "b" -> 5, "c" -> 3)
+  val ids: Seq[Any] = Seq[Any](2, 3, 5, 7)
+  val cities: Seq[Any] = Seq[Any]("LA", "NY", "SLC")
+  val percentiles: Seq[Any] = Seq[Any](.95, "0.4", ".3", 1, "static", "0.2")
   var params: Map[String, Any] = _
 
   before {
@@ -38,64 +39,64 @@
       "numbers" -> numbers,
       "ids" -> ids,
       "cities" -> cities,
-      "percentiles" -> percentiles
-    )
+      "percentiles" -> percentiles)
   }
 
   "TransUtil" should "transform all basic data types" in {
     import ParamUtil.TransUtil._
-    toAny("test") should be (Some("test"))
-    toAnyRef[Seq[_]]("test") should be (None)
-    toAnyRef[Seq[_]](Seq(1, 2)) should be (Some(Seq(1, 2)))
-    toStringOpt("test") should be (Some("test"))
-    toStringOpt(123) should be (Some("123"))
-    toByte(12) should be (Some(12))
-    toByte(123456) should not be (Some(123456))
-    toShort(12) should be (Some(12))
-    toShort(123456) should not be (Some(123456))
-    toInt(12) should be (Some(12))
-    toInt(1.8) should be (Some(1))
-    toInt("123456") should be (Some(123456))
-    toLong(123456) should be (Some(123456L))
-    toFloat(1.2) should be (Some(1.2f))
-    toDouble("1.21") should be (Some(1.21))
-    toBoolean(true) should be (Some(true))
-    toBoolean("false") should be (Some(false))
-    toBoolean("test") should be (None)
+    toAny("test") should be(Some("test"))
+    toAnyRef[Seq[_]]("test") should be(None)
+    toAnyRef[Seq[_]](Seq(1, 2)) should be(Some(Seq(1, 2)))
+    toStringOpt("test") should be(Some("test"))
+    toStringOpt(123) should be(Some("123"))
+    toByte(12) should be(Some(12))
+    toByte(123456) should not be Some(123456)
+    toShort(12) should be(Some(12))
+    toShort(123456) should not be Some(123456)
+    toInt(12) should be(Some(12))
+    toInt(1.8) should be(Some(1))
+    toInt("123456") should be(Some(123456))
+    toLong(123456) should be(Some(123456L))
+    toFloat(1.2) should be(Some(1.2f))
+    toDouble("1.21") should be(Some(1.21))
+    toBoolean(true) should be(Some(true))
+    toBoolean("false") should be(Some(false))
+    toBoolean("test") should be(None)
   }
 
   "params" should "extract string any map field" in {
-    params.getParamAnyMap("fruits") should be (fruits)
-    params.getParamAnyMap("numbers") should be (numbers)
-    params.getParamAnyMap("name") should be (Map.empty[String, Any])
+    params.getParamAnyMap("fruits") should be(fruits)
+    params.getParamAnyMap("numbers") should be(numbers)
+    params.getParamAnyMap("name") should be(Map.empty[String, Any])
   }
 
   "params" should "extract string string map field" in {
-    params.getParamStringMap("fruits") should be (fruits)
-    params.getParamStringMap("numbers") should be (Map[String, String]("a" -> "1", "b" -> "5", "c" -> "3"))
-    params.getParamStringMap("name") should be (Map.empty[String, String])
+    params.getParamStringMap("fruits") should be(fruits)
+    params.getParamStringMap("numbers") should be(
+      Map[String, String]("a" -> "1", "b" -> "5", "c" -> "3"))
+    params.getParamStringMap("name") should be(Map.empty[String, String])
   }
 
   "params" should "extract array field" in {
-    params.getStringArr("ids") should be (Seq("2", "3", "5", "7"))
-    params.getStringArr("cities") should be (cities)
-    params.getStringArr("name") should be (Nil)
+    params.getStringArr("ids") should be(Seq("2", "3", "5", "7"))
+    params.getStringArr("cities") should be(cities)
+    params.getStringArr("name") should be(Nil)
   }
 
   "params" should "get double array" in {
-    params.getDoubleArr("percentiles") should be (Seq(0.95, 0.4, 0.3, 1, 0.2))
+    params.getDoubleArr("percentiles") should be(Seq(0.95, 0.4, 0.3, 1, 0.2))
   }
 
   "map" should "add if not exist" in {
     val map = Map[String, Any]("a" -> 1, "b" -> 2)
-    map.addIfNotExist("a", 11) should be (Map[String, Any]("a" -> 1, "b" -> 2))
-    map.addIfNotExist("c", 11) should be (Map[String, Any]("a" -> 1, "b" -> 2, "c" -> 11))
+    map.addIfNotExist("a", 11) should be(Map[String, Any]("a" -> 1, "b" -> 2))
+    map.addIfNotExist("c", 11) should be(Map[String, Any]("a" -> 1, "b" -> 2, "c" -> 11))
   }
 
   "map" should "remove keys" in {
     val map = Map[String, Any]("a" -> 1, "b" -> 2)
-    map.removeKeys(Seq("c", "d")) should be (Map[String, Any]("a" -> 1, "b" -> 2))
-    map.removeKeys(Seq("a")) should be (Map[String, Any]("b" -> 2))
+    map.removeKeys(Seq("c", "d")) should be(Map[String, Any]("a" -> 1, "b" -> 2))
+    map.removeKeys(Seq("a")) should be(Map[String, Any]("b" -> 2))
   }
 
 }
diff --git a/pom.xml b/pom.xml
index 25574ce..7a68625 100644
--- a/pom.xml
+++ b/pom.xml
@@ -152,30 +152,6 @@
                     </executions>
                 </plugin>
                 <plugin>
-                    <groupId>org.scalastyle</groupId>
-                    <artifactId>scalastyle-maven-plugin</artifactId>
-                    <version>1.0.0</version>
-                    <configuration>
-                        <verbose>false</verbose>
-                        <failOnViolation>true</failOnViolation>
-                        <includeTestSourceDirectory>false</includeTestSourceDirectory>
-                        <failOnWarning>false</failOnWarning>
-                        <sourceDirectory>${basedir}/src/main/scala</sourceDirectory>
-                        <testSourceDirectory>${basedir}/src/test/scala</testSourceDirectory>
-                        <configLocation>scalastyle-config.xml</configLocation>
-                        <outputFile>${basedir}/target/scalastyle-output.xml</outputFile>
-                        <inputEncoding>${project.build.sourceEncoding}</inputEncoding>
-                        <outputEncoding>${project.reporting.outputEncoding}</outputEncoding>
-                    </configuration>
-                    <executions>
-                        <execution>
-                            <goals>
-                                <goal>check</goal>
-                            </goals>
-                        </execution>
-                    </executions>
-                </plugin>
-                <plugin>
                     <groupId>org.apache.rat</groupId>
                     <artifactId>apache-rat-plugin</artifactId>
                     <version>${maven-apache-rat.version}</version>