[CARBONDATA-3490] Fix concurrent data load failure with carbondata FileNotFound exception

problem: When two loads run concurrently, one load cleans up the temp directory of the other, still-running load.

cause: The temp directory that stores the carbon files is named using System.nanoTime(), so two concurrent loads can end up with the same store location. When one load completes, it cleans up the shared temp directory, causing a data load failure for the other load.

solution:
Use a UUID instead of nanoTime when creating the temp directory, so that each load gets a unique directory.
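A minimal sketch of the fixed suffix generation, outside the CarbonData codebase: a dash-stripped random UUID replaces System.nanoTime(), so two concurrent loads can no longer collide on the same temp directory name. The helper name and the literal "_" separator are illustrative stand-ins (the real code uses CarbonCommonConstants.UNDERSCORE).

```scala
import java.io.File
import java.util.UUID

object TmpLocationSketch {
  // Hypothetical helper mirroring the patched logic in CommonUtil:
  // UUID.randomUUID() is effectively collision-free across processes,
  // unlike nanoTime(), which two JVMs can return identically.
  def tmpLocationSuffix(index: Int): String = {
    val uuid = UUID.randomUUID().toString.replace("-", "")
    s"${File.separator}carbon${uuid}_$index"
  }
}
```

Because every call draws a fresh UUID, two loads (or two tasks with the same index) always receive distinct directories, so one load's cleanup can no longer delete the other's files.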

This closes #3352
diff --git a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
index 7015279..8d6cdfb 100644
--- a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
+++ b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
@@ -21,6 +21,7 @@
 import java.math.BigDecimal
 import java.text.SimpleDateFormat
 import java.util
+import java.util.UUID
 import java.util.regex.{Matcher, Pattern}
 
 import scala.collection.JavaConverters._
@@ -777,8 +778,10 @@
     val isCarbonUseYarnLocalDir = CarbonProperties.getInstance().getProperty(
       CarbonCommonConstants.CARBON_LOADING_USE_YARN_LOCAL_DIR,
       CarbonCommonConstants.CARBON_LOADING_USE_YARN_LOCAL_DIR_DEFAULT).equalsIgnoreCase("true")
-    val tmpLocationSuffix =
-      s"${File.separator}carbon${System.nanoTime()}${CarbonCommonConstants.UNDERSCORE}$index"
+    val tmpLocationSuffix = s"${ File.separator }carbon${
+      UUID.randomUUID().toString
+        .replace("-", "")
+    }${ CarbonCommonConstants.UNDERSCORE }$index"
     if (isCarbonUseYarnLocalDir) {
       val yarnStoreLocations = Util.getConfiguredLocalDirs(SparkEnv.get.conf)