Added documentation for new features in the DML and DDL sections, and added new content to the troubleshooting guide.
diff --git a/docs/ddl-operation-on-carbondata.md b/docs/ddl-operation-on-carbondata.md
index d261963..ca2107e 100644
--- a/docs/ddl-operation-on-carbondata.md
+++ b/docs/ddl-operation-on-carbondata.md
@@ -27,18 +27,18 @@
 * [SHOW TABLE](#show-table)
 * [DROP TABLE](#drop-table)
 * [COMPACTION](#compaction)
+* [BUCKETING](#bucketing)
 
 ## CREATE TABLE
   This command can be used to create a CarbonData table by specifying the list of fields along with the table properties.
-  
 ```
-   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
-                    [(col_name data_type , ...)]               
+   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
+                    [(col_name data_type , ...)]
    STORED BY 'carbondata'
    [TBLPROPERTIES (property_name=property_value, ...)]
    // All Carbon's additional table options will go into properties
 ```
-   
+
 ### Parameter Description
 
 | Parameter | Description | Optional |
@@ -48,93 +48,86 @@
 | table_name | The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. | No |
 | STORED BY | "org.apache.carbondata.format", identifies and creates a CarbonData table. | No |
 | TBLPROPERTIES | List of CarbonData table properties. |  |
- 
- 
+
 ### Usage Guidelines
-            
+
    Following are the guidelines for using table properties.
-     
+
    - **Dictionary Encoding Configuration**
-   
+
        Dictionary encoding is enabled by default for all String columns, and disabled for non-String columns. You can include and exclude columns for dictionary encoding.
-     
 ```
-       TBLPROPERTIES ("DICTIONARY_EXCLUDE"="column1, column2") 
-       TBLPROPERTIES ("DICTIONARY_INCLUDE"="column1, column2") 
+       TBLPROPERTIES ("DICTIONARY_EXCLUDE"="column1, column2")
+       TBLPROPERTIES ("DICTIONARY_INCLUDE"="column1, column2")
 ```
-       
+
    Here, DICTIONARY_EXCLUDE will exclude dictionary creation. This is applicable for high-cardinality columns and is an optional parameter. DICTIONARY_INCLUDE will generate dictionary for the columns specified in the list.
-     
+
    - **Row/Column Format Configuration**
-     
+
        Column groups with more than one column are stored in row format, instead of columnar format. By default, each column is a separate column group.
-     
 ```
-TBLPROPERTIES ("COLUMN_GROUPS"="(column1,column3),
-(Column4,Column5,Column6)") 
+TBLPROPERTIES ("COLUMN_GROUPS"="(column1, column3),
+(Column4, Column5, Column6)")
 ```
-   
+
    - **Table Block Size Configuration**
-   
+
      The block size of table files can be defined using the property TABLE_BLOCKSIZE. It accepts only integer values. The default value is 1024 MB and supports a range of 1 MB to 2048 MB.
-     If you do not specify this value in the DDL command , default value is used. 
-     
+     If you do not specify this value in the DDL command, the default value is used.
 ```
        TBLPROPERTIES ("TABLE_BLOCKSIZE"="512 MB")
 ```
-     
+
   Here 512 MB means the block size of this table is 512 MB, you can also set it as 512M or 512.
-   
+
    - **Inverted Index Configuration**
-     
+
       Inverted index is very useful to improve compression ratio and query speed, especially for those low-cardinality columns who are in reward position.
       By default inverted index is enabled. The user can disable the inverted index creation for some columns.
-     
 ```
-       TBLPROPERTIES ("NO_INVERTED_INDEX"="column1,column3")
+       TBLPROPERTIES ("NO_INVERTED_INDEX"="column1, column3")
 ```
 
   No inverted index shall be generated for the columns specified in NO_INVERTED_INDEX. This property is applicable on columns with high-cardinality and is an optional parameter.
 
    NOTE:
-     
-   - By default all columns other than numeric datatype are treated as dimensions and all columns of numeric datatype are treated as measures. 
-    
+
+   - By default, all columns other than numeric datatype are treated as dimensions, and all columns of numeric datatype are treated as measures.
+
    - All dimensions except complex datatype columns are part of multi dimensional key(MDK). This behavior can be overridden by using TBLPROPERTIES. If the user wants to keep any column (except columns of complex datatype) in multi dimensional key then he can keep the columns either in DICTIONARY_EXCLUDE or DICTIONARY_INCLUDE.
-     
-     
+
 ### Example:
 ```
    CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
                                 productNumber Int,
-                                productName String, 
-                                storeCity String, 
-                                storeProvince String, 
-                                productCategory String, 
+                                productName String,
+                                storeCity String,
+                                storeProvince String,
+                                productCategory String,
                                 productBatch String,
                                 saleQuantity Int,
-                                revenue Int)       
-   STORED BY 'carbondata' 
+                                revenue Int)
+   STORED BY 'carbondata'
    TBLPROPERTIES ('COLUMN_GROUPS'='(productName,productCategory)',
                   'DICTIONARY_EXCLUDE'='productName',
                   'DICTIONARY_INCLUDE'='productNumber',
                   'NO_INVERTED_INDEX'='productBatch')
 ```
-    
+
 ## SHOW TABLE
 
   This command can be used to list all the tables in current database or all the tables of a specific database.
 ```
   SHOW TABLES [IN db_Name];
 ```
-  
+
 ### Parameter Description
 | Parameter  | Description                                                                               | Optional |
 |------------|-------------------------------------------------------------------------------------------|----------|
 | IN db_Name | Name of the database. Required only if tables of this specific database are to be listed. | Yes      |
 
 ### Example:
-  
 ```
   SHOW TABLES IN ProductSchema;
 ```
@@ -142,7 +135,6 @@
 ## DROP TABLE
 
  This command is used to delete an existing table.
-
 ```
   DROP TABLE [IF EXISTS] [db_name.]table_name;
 ```
@@ -154,7 +146,6 @@
 | table_name | Name of the table to be deleted. | NO |
 
 ### Example:
-
 ```
   DROP TABLE IF EXISTS productSchema.productSalesTable;
 ```
@@ -162,13 +153,12 @@
 ## COMPACTION
 
 This command merges the specified number of segments into one segment. This enhances the query performance of the table.
-
 ```
   ALTER TABLE [db_name.]table_name COMPACT 'MINOR/MAJOR';
 ```
-  
+
   To get details about Compaction refer to [Data Management](data-management.md)
-  
+
 ### Parameter Description
 
 | Parameter | Description | Optional |
@@ -179,15 +169,64 @@
 ### Syntax
 
 - **Minor Compaction**
-
 ```
 ALTER TABLE table_name COMPACT 'MINOR';
 ```
 - **Major Compaction**
-
 ```
 ALTER TABLE table_name COMPACT 'MAJOR';
 ```
 
-  
-  
+## BUCKETING
+
+The bucketing feature can be used to distribute/organize the table/partition data into multiple files
+such that similar records are present in the same file. While creating a table, the user needs to specify
+the columns to be used for bucketing and the number of buckets. The bucket for a record is selected based
+on the hash value of the bucketing columns.
+```
+   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
+                    [(col_name data_type, ...)]
+   STORED BY 'carbondata'
+   TBLPROPERTIES ("BUCKETNUMBER"="noOfBuckets",
+   "BUCKETCOLUMNS"="columnname", "TABLENAME"="tablename")
+```
+
+### Parameter Description
+
+| Parameter | Description | Optional |
+|---------------|------------------------------------------------------------------------------------------------------------------------------|----------|
+| BUCKETNUMBER | Specifies the number of buckets to be created. | No |
+| BUCKETCOLUMNS | Specifies the columns to be considered for bucketing. | No |
+| TABLENAME | The name of the table in the database. The table name should consist of alphanumeric characters and the underscore(_) special character. | Yes |
+
+### Usage Guidelines
+
+- The feature is supported from Spark 1.6.2 onwards, but the performance optimization is evident only from Spark 2.1 onwards.
+
+- Bucketing cannot be performed on columns of complex data types.
+
+- Columns in the BUCKETCOLUMNS parameter must be either dimensions or measures; a combination of both is not supported.
+
+### Example:
+```
+ CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+                                productNumber Int,
+                                productName String,
+                                storeCity String,
+                                storeProvince String,
+                                productCategory String,
+                                productBatch String,
+                                saleQuantity Int,
+                                revenue Int)
+   STORED BY 'carbondata'
+   TBLPROPERTIES ('COLUMN_GROUPS'='(productName,productCategory)',
+                  'DICTIONARY_EXCLUDE'='productName',
+                  'DICTIONARY_INCLUDE'='productNumber',
+                  'NO_INVERTED_INDEX'='productBatch',
+                  'BUCKETNUMBER'='4',
+                  'BUCKETCOLUMNS'='productNumber,saleQuantity',
+                  'TABLENAME'='productSalesTable')
+```
+
diff --git a/docs/dml-operation-on-carbondata.md b/docs/dml-operation-on-carbondata.md
index 0523d95..74fa0b0 100644
--- a/docs/dml-operation-on-carbondata.md
+++ b/docs/dml-operation-on-carbondata.md
@@ -133,7 +133,27 @@
 
     NOTE: Date formats are specified by date pattern strings. The date pattern letters in CarbonData are same as in JAVA. Refer to [SimpleDateFormat](http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html).
 
+- **USE_KETTLE:** This option specifies whether to use kettle for loading the data. By default, kettle is not used for data loading.
 
+    ```
+    OPTIONS('USE_KETTLE'='FALSE')
+    ```
+
+   NOTE: It is recommended to set this option to FALSE.
+
+- **SINGLE_PASS:** Single pass loading enables a single job to finish data loading with dictionary generation on the fly. It enhances performance in scenarios where subsequent data loads after the initial load involve fewer incremental updates to the dictionary.
+
+   This option specifies whether to use single pass for loading data. By default, this option is set to FALSE.
+
+    ```
+    OPTIONS('SINGLE_PASS'='TRUE')
+    ```
+
+   NOTE:
+
+   * If this option is set to TRUE, data loading takes less time.
+
+   * If this option is set to any value other than TRUE or FALSE, the default value is used.
+
 ### Example:
 
 ```
@@ -142,9 +162,11 @@
 'FILEHEADER'='empno,empname,designation,doj,workgroupcategory,
  workgroupcategoryname,deptno,deptname,projectcode,
  projectjoindate,projectenddate,attendance,utilization,salary',
-'MULTILINE'='true','ESCAPECHAR'='\','COMPLEX_DELIMITER_LEVEL_1'='$', 
+'MULTILINE'='true','ESCAPECHAR'='\','COMPLEX_DELIMITER_LEVEL_1'='$',
 'COMPLEX_DELIMITER_LEVEL_2'=':',
-'ALL_DICTIONARY_PATH'='/opt/alldictionary/data.dictionary'
+'ALL_DICTIONARY_PATH'='/opt/alldictionary/data.dictionary',
+'USE_KETTLE'='FALSE',
+'SINGLE_PASS'='TRUE'
 )
 ```
 
diff --git a/docs/installation-guide.md b/docs/installation-guide.md
index 7c1f7eb..d8f1b5e 100644
--- a/docs/installation-guide.md
+++ b/docs/installation-guide.md
@@ -53,14 +53,13 @@
     NOTE: carbonplugins will contain .kettle folder.
     
 * In Spark node, configure the properties mentioned in the following table in ``"<SPARK_HOME>/conf/spark-defaults.conf"`` file.
-  
-| Property | Description | Value |
-|--------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------|
-| carbon.kettle.home | Path that will be used by CarbonData internally to create graph for loading the data | $SPARK_HOME /carbonlib/carbonplugins |
-| spark.driver.extraJavaOptions | A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. | -Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties |
-| spark.executor.extraJavaOptions | A string of extra JVM options to pass to executors. For instance, GC settings or other logging. NOTE: You can enter multiple values separated by space. | -Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties |
 
-  
+| Property | Value | Description |
+|---------------------------------|-----------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|
+| carbon.kettle.home | $SPARK_HOME/carbonlib/carbonplugins | Path that will be used by CarbonData internally to create graph for loading the data |
+| spark.driver.extraJavaOptions | -Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties | A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. |
+| spark.executor.extraJavaOptions | -Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties | A string of extra JVM options to pass to executors. For instance, GC settings or other logging. NOTE: You can enter multiple values separated by space. |
+
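+  For reference, a minimal sketch of the corresponding entries in ``spark-defaults.conf`` (replace $SPARK_HOME with the actual Spark installation path on your cluster):
+
+```
+carbon.kettle.home                 $SPARK_HOME/carbonlib/carbonplugins
+spark.driver.extraJavaOptions      -Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties
+spark.executor.extraJavaOptions    -Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties
+```
+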
 * Add the following properties in ``"<SPARK_HOME>/conf/" carbon.properties``:
 
 | Property             | Required | Description                                                                            | Example                             | Remark  |
@@ -78,7 +77,7 @@
 
 NOTE: Make sure you have permissions for CarbonData JARs and files through which driver and executor will start.
 
-To get started with CarbonData : [Quick Start](quick-start-guide.md) , [DDL Operations on CarbonData](ddl-operation-on-carbondata.md)
+To get started with CarbonData : [Quick Start](quick-start-guide.md), [DDL Operations on CarbonData](ddl-operation-on-carbondata.md)
 
 ## Installing and Configuring CarbonData on "Spark on YARN" Cluster
 
@@ -123,14 +122,14 @@
 
 
 * Verify the installation.
-   
+
 ```
-     ./bin/spark-shell --master yarn-client --driver-memory 1g 
+     ./bin/spark-shell --master yarn-client --driver-memory 1g
      --executor-cores 2 --executor-memory 2G
 ```
   NOTE: Make sure you have permissions for CarbonData JARs and files through which driver and executor will start.
 
-  Getting started with CarbonData : [Quick Start](quick-start-guide.md) , [DDL Operations on CarbonData](ddl-operation-on-carbondata.md)
+  Getting started with CarbonData : [Quick Start](quick-start-guide.md), [DDL Operations on CarbonData](ddl-operation-on-carbondata.md)
 
 ## Query Execution Using CarbonData Thrift Server
 
@@ -139,17 +138,17 @@
    a. cd ``<SPARK_HOME>``
 
    b. Run the following command to start the CarbonData thrift server.
-     
+
 ```
-./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
+./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true
 --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
 $SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR <carbon_store_path>
 ```
-  
+
 | Parameter | Description | Example |
 |---------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------|
 | CARBON_ASSEMBLY_JAR | CarbonData assembly jar name present in the ``"<SPARK_HOME>"/carbonlib/`` folder. | carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar |
-| carbon_store_path | This is a parameter to the CarbonThriftServer class. This a HDFS path where CarbonData files will be kept. Strongly Recommended to put same as carbon.storelocation parameter of carbon.properties. | hdfs//<host_name>:54310/user/hive/warehouse/carbon.store |
+| carbon_store_path | This is a parameter to the CarbonThriftServer class. This is an HDFS path where CarbonData files will be kept. It is strongly recommended to keep it the same as the carbon.storelocation parameter of carbon.properties. | ``hdfs://<host_name>:54310/user/hive/warehouse/carbon.store`` |
 
 ### Examples
    
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 21d3db6..9181d83 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -18,13 +18,230 @@
 -->
 
 # Troubleshooting
-This tutorial is designed to provide troubleshooting for end users and developers 
+This tutorial is designed to provide troubleshooting for end users and developers
 who are building, deploying, and using CarbonData.
 
-### General Prevention and Best Practices
- * When trying to create a table with a single numeric column, table creation fails: 
-   One column that can be considered as dimension is mandatory for table creation.
-         
- * "Files locked for updation" when same table is accessed from two or more instances: 
-    Remove metastore_db from the examples folder.
+## Failed to load thrift libraries
 
+  **Symptom**
+
+  Thrift throws the following exception:
+
+  ```
+  thrift: error while loading shared libraries:
+  libthriftc.so.0: cannot open shared object file: No such file or directory
+  ```
+
+  **Possible Cause**
+
+  The complete path to the directory containing the libraries is not configured correctly.
+
+  **Procedure**
+
+  Follow the Apache thrift docs at [https://thrift.apache.org/docs/install](https://thrift.apache.org/docs/install) to install thrift correctly.
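+
+  If thrift is already built and installed into a non-standard prefix, pointing the dynamic linker at the directory containing the libraries usually resolves this error. A sketch, assuming the libraries were installed under /usr/local/lib:
+
+  ```
+  # make the thrift shared libraries visible to the dynamic linker
+  export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
+  sudo ldconfig
+  ```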
+
+## Failed to launch the Spark Shell
+
+  **Symptom**
+
+  The shell displays the following error:
+
+  ```
+  org.apache.spark.sql.CarbonContext$$anon$$apache$spark$sql$catalyst$analysis
+  $OverrideCatalog$_setter_$org$apache$spark$sql$catalyst$analysis
+  $OverrideCatalog$$overrides_$e
+  ```
+
+  **Possible Cause**
+
+  The Spark Version and the selected Spark Profile do not match.
+
+  **Procedure**
+
+  1. Ensure that your Spark version and the selected Spark profile match.
+
+  2. Use the following command:
+
+    ```
+     "mvn -Pspark-2.1 -Dspark.version {yourSparkVersion} clean package"
+    ```
+
+    NOTE: Refrain from using "mvn clean package" without specifying the profile.
+
+## Failed to execute load query on cluster
+
+  **Symptom**
+
+  Load query failed with the following exception:
+
+  ```
+  Dictionary file is locked for updation.
+  ```
+
+  **Possible Cause**
+
+  The carbon.properties file is not identical in all the nodes of the cluster.
+
+  **Procedure**
+
+  Follow the steps to ensure the carbon.properties file is consistent across all the nodes:
+
+  1. Copy the carbon.properties file from the master node to all the other nodes in the cluster.
+     For example, you can use scp to copy this file to all the nodes, as shown in the sketch after this procedure.
+
+  2. For the changes to take effect, restart the Spark cluster.
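+
+  A sketch of step 1 using scp (the user and host names are placeholders for your actual worker nodes):
+
+  ```
+  # copy carbon.properties from the master node to each worker node
+  scp $SPARK_HOME/conf/carbon.properties user@worker-node-1:$SPARK_HOME/conf/
+  scp $SPARK_HOME/conf/carbon.properties user@worker-node-2:$SPARK_HOME/conf/
+  ```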
+
+## Failed to execute insert query on cluster
+
+  **Symptom**
+
+  Insert query failed with the following exception:
+
+  ```
+  Dictionary file is locked for updation.
+  ```
+
+  **Possible Cause**
+
+  The carbon.properties file is not identical in all the nodes of the cluster.
+
+  **Procedure**
+
+  Follow the steps to ensure the carbon.properties file is consistent across all the nodes:
+
+  1. Copy the carbon.properties file from the master node to all the other nodes in the cluster.
+       For example, you can use scp to copy this file to all the nodes.
+
+  2. For the changes to take effect, restart the Spark cluster.
+
+## Failed to connect to hiveuser with thrift
+
+  **Symptom**
+
+  The following exception is thrown:
+
+  ```
+  Cannot connect to hiveuser.
+  ```
+
+  **Possible Cause**
+
+  The external process does not have permission to access the hiveuser.
+
+  **Procedure**
+
+  Ensure that the hiveuser in MySQL allows access from external processes.
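+
+  A sketch of granting such access in MySQL (the user name, host pattern and password are placeholders; the exact statement depends on your MySQL version and security policy):
+
+  ```
+  -- allow the hive metastore user to connect from other hosts
+  GRANT ALL PRIVILEGES ON *.* TO 'hiveuser'@'%' IDENTIFIED BY 'password';
+  FLUSH PRIVILEGES;
+  ```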
+
+## Failure to read the metastore db during table creation
+
+  **Symptom**
+
+  The following exception is thrown when trying to connect:
+
+  ```
+  Cannot read the metastore db
+  ```
+
+  **Possible Cause**
+
+  The metastore db is dysfunctional.
+
+  **Procedure**
+
+  Remove the metastore db from the carbon.metastore in the Spark Directory.
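+
+  For example, assuming the default embedded Derby metastore, the stale ``metastore_db`` directory can be removed from the directory where the Spark application or shell was started:
+
+  ```
+  # remove the stale embedded metastore created in the working directory
+  rm -rf metastore_db
+  ```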
+
+## Failed to load data on the cluster
+
+  **Symptom**
+
+  Data loading fails with the following exception:
+
+   ```
+   Data Load failure exception
+   ```
+
+  **Possible Cause**
+
+  The following issues can cause the failure:
+
+  1. The core-site.xml, hive-site.xml, yarn-site.xml and carbon.properties files are not consistent across all nodes of the cluster.
+
+  2. Path to hdfs ddl is not configured correctly in the carbon.properties.
+
+  **Procedure**
+
+   Follow the steps to ensure the following configuration files are consistent across all the nodes:
+
+   1. Copy the core-site.xml, hive-site.xml, yarn-site.xml and carbon.properties files from the master node to all the other nodes in the cluster.
+      For example, you can use scp to copy these files to all the nodes (see the sketch after this procedure).
+
+      NOTE: Set the path to hdfs ddl in carbon.properties on the master node.
+
+   2. For the changes to take effect, restart the Spark cluster.
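+
+   A sketch of step 1 using scp, assuming the same directory layout on every node and that the Hadoop configuration files live under $HADOOP_CONF_DIR (the user and node names are placeholders):
+
+   ```
+   # copy each configuration file to the same path on a worker node
+   for f in $HADOOP_CONF_DIR/core-site.xml $HADOOP_CONF_DIR/yarn-site.xml \
+            $SPARK_HOME/conf/hive-site.xml $SPARK_HOME/conf/carbon.properties; do
+     scp "$f" user@worker-node-1:"$f"
+   done
+   ```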
+
+## Failed to insert data on the cluster
+
+  **Symptom**
+
+  Insertion fails with the following exception:
+
+   ```
+   Data Load failure exception
+   ```
+
+  **Possible Cause**
+
+  The following issues can cause the failure:
+
+  1. The core-site.xml, hive-site.xml, yarn-site.xml and carbon.properties files are not consistent across all nodes of the cluster.
+
+  2. Path to hdfs ddl is not configured correctly in the carbon.properties.
+
+  **Procedure**
+
+   Follow the steps to ensure the following configuration files are consistent across all the nodes:
+
+   1. Copy the core-site.xml, hive-site.xml, yarn-site.xml and carbon.properties files from the master node to all the other nodes in the cluster.
+      For example, you can use scp to copy these files to all the nodes.
+
+      NOTE: Set the path to hdfs ddl in carbon.properties on the master node.
+
+   2. For the changes to take effect, restart the Spark cluster.
+
+## Failed to execute concurrent operations (Load, Insert, Update) on a table by multiple workers
+
+  **Symptom**
+
+  Execution fails with the following exception:
+
+   ```
+   Table is locked for updation.
+   ```
+
+  **Possible Cause**
+
+  Concurrent operations on the same table are not supported.
+
+  **Procedure**
+
+  The worker must wait for the running query to complete and the table lock to be released before another query on the same table can succeed.
+
+## Failed to create a table with a single numeric column
+
+  **Symptom**
+
+  Execution fails with the following exception:
+
+   ```
+   Table creation fails.
+   ```
+
+  **Possible Cause**
+
+  This behavior is not supported.
+
+  **Procedure**
+
+  At least one column that can be treated as a dimension is mandatory for table creation.
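+
+  For example, adding a String column (treated as a dimension by default) alongside the numeric column allows the table to be created. A sketch with hypothetical table and column names:
+
+  ```
+  CREATE TABLE IF NOT EXISTS sampleSchema.sampleTable (
+                               saleQuantity Int,
+                               productName String)
+  STORED BY 'carbondata'
+  ```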