[CARBONDATA-3836] Fix metadata folder FileNotFoundException while creating new carbon table

Why is this PR needed?
1. When only carbon.storelocation is configured, carbon uses the local Spark warehouse path instead of the configured value.
2. A FileNotFoundException is thrown when creating the schema file for a brand-new table, because the current implementation derives the schema file path by listing the Metadata directory, which has not yet been created.

What changes were proposed in this PR?
1. spark.sql.warehouse.dir already has its own default value in Spark, so stop passing carbonStorePath as its fallback; that fallback made hiveStorePath.equals(carbonStorePath) evaluate to true whenever the user had not set spark.sql.warehouse.dir.
2. Create the Metadata directory before resolving the schema file path.
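The ordering fix in point 2 can be sketched in isolation. This is a minimal, hypothetical illustration using only java.nio.file (not the actual CarbonData FileFactory/CarbonTablePath APIs); the directory and file names are assumptions chosen to mirror the table layout:

```scala
import java.nio.file.{Files, Path, Paths}

// Hypothetical sketch: ensure the Metadata directory exists BEFORE
// resolving and writing the schema file, mirroring the patch's ordering.
// "Metadata" and "schema" are illustrative names, not CarbonData APIs.
object SchemaWriteSketch {
  def writeSchema(tablePath: String, schemaBytes: Array[Byte]): String = {
    val metadataDir: Path = Paths.get(tablePath, "Metadata")
    if (!Files.exists(metadataDir)) {
      // mkdirs first; the old code listed this directory before it existed
      Files.createDirectories(metadataDir)
    }
    val schemaFile = metadataDir.resolve("schema")
    // the parent directory now exists, so the write cannot fail with
    // a missing-directory error
    Files.write(schemaFile, schemaBytes)
    schemaFile.toString
  }
}
```

The essential point is that the path is resolved from the table path directly (as getMetadataPath does in the patch) rather than by listing a directory that may not exist yet.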

Does this PR introduce any user interface change?
No

Is any new testcase added?
No

This closes #3780
diff --git a/integration/spark/src/main/scala/org/apache/spark/sql/CarbonEnv.scala b/integration/spark/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
index 9df5809..5062a43 100644
--- a/integration/spark/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
+++ b/integration/spark/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
@@ -328,7 +328,7 @@
     if ((!EnvHelper.isLegacy(sparkSession)) &&
         (dbName.equals("default") || databaseLocation.endsWith(".db"))) {
       val carbonStorePath = CarbonProperties.getStorePath()
-      val hiveStorePath = sparkSession.conf.get("spark.sql.warehouse.dir", carbonStorePath)
+      val hiveStorePath = sparkSession.conf.get("spark.sql.warehouse.dir")
       // if carbon.store does not point to spark.sql.warehouse.dir then follow the old table path
       // format
       if (carbonStorePath != null && !hiveStorePath.equals(carbonStorePath)) {
diff --git a/integration/spark/src/main/scala/org/apache/spark/sql/hive/CarbonFileMetastore.scala b/integration/spark/src/main/scala/org/apache/spark/sql/hive/CarbonFileMetastore.scala
index 5156798..4c5f16d 100644
--- a/integration/spark/src/main/scala/org/apache/spark/sql/hive/CarbonFileMetastore.scala
+++ b/integration/spark/src/main/scala/org/apache/spark/sql/hive/CarbonFileMetastore.scala
@@ -402,8 +402,7 @@
   private def createSchemaThriftFile(
       identifier: AbsoluteTableIdentifier,
       thriftTableInfo: TableInfo): String = {
-    val schemaFilePath = CarbonTablePath.getSchemaFilePath(identifier.getTablePath)
-    val schemaMetadataPath = CarbonTablePath.getFolderContainingFile(schemaFilePath)
+    val schemaMetadataPath = CarbonTablePath.getMetadataPath(identifier.getTablePath)
     if (!FileFactory.isFileExist(schemaMetadataPath)) {
       val isDirCreated = FileFactory
         .mkdirs(schemaMetadataPath, SparkSession.getActiveSession.get.sessionState.newHadoopConf())
@@ -411,6 +410,7 @@
         throw new IOException(s"Failed to create the metadata directory $schemaMetadataPath")
       }
     }
+    val schemaFilePath = CarbonTablePath.getSchemaFilePath(identifier.getTablePath)
     val thriftWriter = new ThriftWriter(schemaFilePath, false)
     thriftWriter.open(FileWriteOperation.OVERWRITE)
     thriftWriter.write(thriftTableInfo)