Improve helix tutorial and code formatting (#1931) (#1932)

Improve helix tutorial and code formatting
diff --git a/helix-admin-webapp/src/main/java/org/apache/helix/webapp/resources/ConfigResource.java b/helix-admin-webapp/src/main/java/org/apache/helix/webapp/resources/ConfigResource.java
index a28ad09..a1148de 100644
--- a/helix-admin-webapp/src/main/java/org/apache/helix/webapp/resources/ConfigResource.java
+++ b/helix-admin-webapp/src/main/java/org/apache/helix/webapp/resources/ConfigResource.java
@@ -190,7 +190,8 @@
   /**
    * set or remove configs depends on "command" field of jsonParameters in POST body
    * @param entity
-   * @param scopeStr
+   * @param type
+   * @param scopeArgs
    * @throws Exception
    */
   void setConfigs(Representation entity, ConfigScopeProperty type, String scopeArgs)
diff --git a/helix-core/src/main/java/org/apache/helix/controller/GenericHelixController.java b/helix-core/src/main/java/org/apache/helix/controller/GenericHelixController.java
index 95416d4..b7e9fa0 100644
--- a/helix-core/src/main/java/org/apache/helix/controller/GenericHelixController.java
+++ b/helix-core/src/main/java/org/apache/helix/controller/GenericHelixController.java
@@ -119,8 +119,8 @@
 /**
  * Cluster Controllers main goal is to keep the cluster state as close as possible to Ideal State.
  * It does this by listening to changes in cluster state and scheduling new tasks to get cluster
- * state to best possible ideal state. Every instance of this class can control can control only one
- * cluster Get all the partitions use IdealState, CurrentState and Messages <br>
+ * state to the best possible ideal state. Every instance of this class can control only one cluster.
+ * state. Get all the partitions using IdealState, CurrentState and Messages <br>
  * foreach partition <br>
  * 1. get the (instance,state) from IdealState, CurrentState and PendingMessages <br>
  * 2. compute best possible state (instance,state) pair. This needs previous step data and state
diff --git a/helix-core/src/main/java/org/apache/helix/controller/rebalancer/topology/Topology.java b/helix-core/src/main/java/org/apache/helix/controller/rebalancer/topology/Topology.java
index 3bc2e3a..3d2a878 100644
--- a/helix-core/src/main/java/org/apache/helix/controller/rebalancer/topology/Topology.java
+++ b/helix-core/src/main/java/org/apache/helix/controller/rebalancer/topology/Topology.java
@@ -148,8 +148,8 @@
 
     List<Node> children = root.getChildren();
     if (children != null) {
-      for (int i = 0; i < children.size(); i++) {
-        Node newChild = cloneTree(children.get(i), newNodeWeight, failedNodes);
+      for (Node child : children) {
+        Node newChild = cloneTree(child, newNodeWeight, failedNodes);
         newChild.setParent(root);
         newRoot.addChild(newChild);
       }
@@ -166,7 +166,7 @@
     root.setType(Types.ROOT.name());
 
     // TODO: Currently we add disabled instance to the topology tree. Since they are not considered
-    // TODO: in relabalnce, maybe we should skip adding them to the tree for consistence.
+    // TODO: in rebalance, maybe we should skip adding them to the tree for consistency.
     for (String instanceName : _allInstances) {
       InstanceConfig insConfig = _instanceConfigMap.get(instanceName);
       try {
diff --git a/website/0.9.8/src/site/markdown/tutorial_task_framework.md b/website/0.9.8/src/site/markdown/tutorial_task_framework.md
index d348544..f6513e7 100644
--- a/website/0.9.8/src/site/markdown/tutorial_task_framework.md
+++ b/website/0.9.8/src/site/markdown/tutorial_task_framework.md
@@ -29,9 +29,9 @@
 ![Task Framework flow chart](./images/TaskFrameworkLayers.png)
 
 ### Key Concepts
-* Task is the basic unit in Helix task framework. It can represents the a single runnable logics that user prefer to execute for each partition (distributed units).
+* Task is the smallest unit of work in the Helix Task Framework. It represents a single runnable logic that the user wants to execute for each partition (distributed unit).
 * Job defines one time operation across all the partitions. It contains multiple Tasks and configuration of tasks, such as how many tasks, timeout per task and so on.
-* Workflow is directed acyclic graph represents the relationships and running orders of Jobs. In addition, a workflow can also provide customized configuration, for example, Job dependencies.
+* Workflow is a directed acyclic graph that represents the relationships and running order of Jobs. In addition, a workflow can provide customized configuration, for example, Job dependencies.
 * JobQueue is another type of Workflow. Different from normal one, JobQueue is not terminated until user kill it. Also JobQueue can keep accepting newly coming jobs.
 
 ### Implement Your Task
@@ -71,7 +71,7 @@
 
 #### Share Content Across Tasks and Jobs
 
-Task framework also provides a feature that user can store the key-value data per task, job and workflow. The content stored at workflow layer can shared by different jobs belong to this workflow. Similarly content persisted at job layer can shared by different tasks nested in this job. Currently, user can extend the abstract class [UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java) and use two methods putUserContent and getUserContent. It will similar to hash map put and get method except a Scope.  The Scope will define which layer this key-value pair to be persisted.
+The Task Framework also lets users store key-value data per task, job, and workflow. Content stored at the workflow layer can be shared by the jobs belonging to that workflow. Similarly, content persisted at the job layer can be shared by the tasks nested in that job. Currently, users can extend the abstract class [UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java) and use its two methods putUserContent and getUserContent. They behave like HashMap's put and get methods, except for the additional Scope parameter. The Scope defines the layer at which the key-value pair is persisted.
 
 ```
 public class MyTask extends UserContentStore implements Task {
@@ -99,7 +99,7 @@
 
 #### Task Retry and Abort
 
-Helix provides retry logics to users. User can specify the how many times allowed to tolerant failure of tasks under a job. It is a method will be introduced in Following Job Section. Another choice offered to user that if user thinks a task is very critical and do not want to do the retry once it is failed, user can return a TaskResult stated above with FATAL_FAILED status. Then Helix will not do the retry for that task.
+Helix provides retry logic to users. Users can specify the number of task failures to tolerate under a job via a method introduced in the following Job section. Alternatively, if a task is critical and should not be retried once it fails, the task can return a TaskResult (as stated above) with the FATAL_FAILED status, and Helix will not retry that task.
 
 ```
 return new TaskResult(TaskResult.Status.FATAL_FAILED, "DO NOT WANT TO RETRY, ERROR MESSAGE");
@@ -194,7 +194,7 @@
 
 #### Add a Job
 
-WARNING: Job can only be added to WorkflowConfig.Builder. Once WorkflowConfig built, no job can be added! For creating a Job, please refering following section (Create a Job)
+WARNING: A Job can only be added to a WorkflowConfig.Builder. Once the WorkflowConfig is built, no more jobs can be added! To create a Job, please refer to the following section (Create a Job).
 
 ```
 myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
@@ -202,7 +202,7 @@
 
 #### Add a Job dependency
 
-Jobs can have dependencies. If one job2 depends job1, job2 will not be scheduled until job1 finished.
+Jobs can have dependencies. If job2 depends on job1, job2 will not be scheduled until job1 has finished.
 
 ```
 myWorkflowBuilder.addParentChildDependency(ParentJobName, ChildJobName);
@@ -224,7 +224,7 @@
 | _setExpiry(long v, TimeUnit unit)_ | Set the expiration time for this workflow. |
 | _setFailureThreshold(int failureThreshold)_ | Set the failure threshold for this workflow, once job failures reach this number, the workflow will be failed. |
 | _setWorkflowType(String workflowType)_ | Set the user defined workflowType for this workflow. |
-| _setTerminable(boolean isTerminable)_ | Set the whether this workflow is terminable or not. |
+| _setTerminable(boolean isTerminable)_ | Specify whether this workflow is terminable or not. |
 | _setCapacity(int capacity)_ | Set the number of jobs that workflow can hold before reject further jobs. Only used when workflow is not terminable. |
 | _setTargetState(TargetState v)_ | Set the final state of this workflow. |
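+
+For illustration, here is a minimal sketch that chains several of these options together (the no-arg WorkflowConfig.Builder constructor, the method chaining, and the "BackupFlow" type label are illustrative and may vary by Helix version; TimeUnit is java.util.concurrent.TimeUnit):
+
+```
+WorkflowConfig workflowConfig = new WorkflowConfig.Builder()
+    .setExpiry(2, TimeUnit.HOURS)   // clean up the workflow two hours after it finishes
+    .setFailureThreshold(1)         // fail the workflow once one job has failed
+    .setWorkflowType("BackupFlow")  // user-defined type label (example value)
+    .setTerminable(true)            // a terminable workflow ends once all its jobs complete
+    .build();
+```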
 
@@ -255,7 +255,7 @@
 
 ####Delete Job from Queue
 
-Helix allowed user to delete a job from existing queue. We offers delete API in TaskDriver to do this. Delete job from queue and this queue has to be stopped. Then user can resume the job once delete success.
+Helix allows users to delete a job from an existing queue. TaskDriver offers a delete API for this. The queue has to be stopped before a job can be deleted, and the user can resume the queue once the deletion succeeds.
 
 ```
 taskDriver.stop("QueueName");
diff --git a/website/0.9.9/src/site/markdown/tutorial_task_framework.md b/website/0.9.9/src/site/markdown/tutorial_task_framework.md
index d348544..f6513e7 100644
--- a/website/0.9.9/src/site/markdown/tutorial_task_framework.md
+++ b/website/0.9.9/src/site/markdown/tutorial_task_framework.md
@@ -29,9 +29,9 @@
 ![Task Framework flow chart](./images/TaskFrameworkLayers.png)
 
 ### Key Concepts
-* Task is the basic unit in Helix task framework. It can represents the a single runnable logics that user prefer to execute for each partition (distributed units).
+* Task is the smallest unit of work in the Helix Task Framework. It represents a single runnable logic that the user wants to execute for each partition (distributed unit).
 * Job defines one time operation across all the partitions. It contains multiple Tasks and configuration of tasks, such as how many tasks, timeout per task and so on.
-* Workflow is directed acyclic graph represents the relationships and running orders of Jobs. In addition, a workflow can also provide customized configuration, for example, Job dependencies.
+* Workflow is a directed acyclic graph that represents the relationships and running order of Jobs. In addition, a workflow can provide customized configuration, for example, Job dependencies.
 * JobQueue is another type of Workflow. Different from normal one, JobQueue is not terminated until user kill it. Also JobQueue can keep accepting newly coming jobs.
 
 ### Implement Your Task
@@ -71,7 +71,7 @@
 
 #### Share Content Across Tasks and Jobs
 
-Task framework also provides a feature that user can store the key-value data per task, job and workflow. The content stored at workflow layer can shared by different jobs belong to this workflow. Similarly content persisted at job layer can shared by different tasks nested in this job. Currently, user can extend the abstract class [UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java) and use two methods putUserContent and getUserContent. It will similar to hash map put and get method except a Scope.  The Scope will define which layer this key-value pair to be persisted.
+The Task Framework also lets users store key-value data per task, job, and workflow. Content stored at the workflow layer can be shared by the jobs belonging to that workflow. Similarly, content persisted at the job layer can be shared by the tasks nested in that job. Currently, users can extend the abstract class [UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java) and use its two methods putUserContent and getUserContent. They behave like HashMap's put and get methods, except for the additional Scope parameter. The Scope defines the layer at which the key-value pair is persisted.
 
 ```
 public class MyTask extends UserContentStore implements Task {
@@ -99,7 +99,7 @@
 
 #### Task Retry and Abort
 
-Helix provides retry logics to users. User can specify the how many times allowed to tolerant failure of tasks under a job. It is a method will be introduced in Following Job Section. Another choice offered to user that if user thinks a task is very critical and do not want to do the retry once it is failed, user can return a TaskResult stated above with FATAL_FAILED status. Then Helix will not do the retry for that task.
+Helix provides retry logic to users. Users can specify the number of task failures to tolerate under a job via a method introduced in the following Job section. Alternatively, if a task is critical and should not be retried once it fails, the task can return a TaskResult (as stated above) with the FATAL_FAILED status, and Helix will not retry that task.
 
 ```
 return new TaskResult(TaskResult.Status.FATAL_FAILED, "DO NOT WANT TO RETRY, ERROR MESSAGE");
@@ -194,7 +194,7 @@
 
 #### Add a Job
 
-WARNING: Job can only be added to WorkflowConfig.Builder. Once WorkflowConfig built, no job can be added! For creating a Job, please refering following section (Create a Job)
+WARNING: A Job can only be added to a WorkflowConfig.Builder. Once the WorkflowConfig is built, no more jobs can be added! To create a Job, please refer to the following section (Create a Job).
 
 ```
 myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
@@ -202,7 +202,7 @@
 
 #### Add a Job dependency
 
-Jobs can have dependencies. If one job2 depends job1, job2 will not be scheduled until job1 finished.
+Jobs can have dependencies. If job2 depends on job1, job2 will not be scheduled until job1 has finished.
 
 ```
 myWorkflowBuilder.addParentChildDependency(ParentJobName, ChildJobName);
@@ -224,7 +224,7 @@
 | _setExpiry(long v, TimeUnit unit)_ | Set the expiration time for this workflow. |
 | _setFailureThreshold(int failureThreshold)_ | Set the failure threshold for this workflow, once job failures reach this number, the workflow will be failed. |
 | _setWorkflowType(String workflowType)_ | Set the user defined workflowType for this workflow. |
-| _setTerminable(boolean isTerminable)_ | Set the whether this workflow is terminable or not. |
+| _setTerminable(boolean isTerminable)_ | Specify whether this workflow is terminable or not. |
 | _setCapacity(int capacity)_ | Set the number of jobs that workflow can hold before reject further jobs. Only used when workflow is not terminable. |
 | _setTargetState(TargetState v)_ | Set the final state of this workflow. |
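+
+For illustration, here is a minimal sketch that chains several of these options together (the no-arg WorkflowConfig.Builder constructor, the method chaining, and the "BackupFlow" type label are illustrative and may vary by Helix version; TimeUnit is java.util.concurrent.TimeUnit):
+
+```
+WorkflowConfig workflowConfig = new WorkflowConfig.Builder()
+    .setExpiry(2, TimeUnit.HOURS)   // clean up the workflow two hours after it finishes
+    .setFailureThreshold(1)         // fail the workflow once one job has failed
+    .setWorkflowType("BackupFlow")  // user-defined type label (example value)
+    .setTerminable(true)            // a terminable workflow ends once all its jobs complete
+    .build();
+```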
 
@@ -255,7 +255,7 @@
 
 ####Delete Job from Queue
 
-Helix allowed user to delete a job from existing queue. We offers delete API in TaskDriver to do this. Delete job from queue and this queue has to be stopped. Then user can resume the job once delete success.
+Helix allows users to delete a job from an existing queue. TaskDriver offers a delete API for this. The queue has to be stopped before a job can be deleted, and the user can resume the queue once the deletion succeeds.
 
 ```
 taskDriver.stop("QueueName");
diff --git a/website/1.0.1/src/site/markdown/tutorial_task_framework.md b/website/1.0.1/src/site/markdown/tutorial_task_framework.md
index d348544..f6513e7 100644
--- a/website/1.0.1/src/site/markdown/tutorial_task_framework.md
+++ b/website/1.0.1/src/site/markdown/tutorial_task_framework.md
@@ -29,9 +29,9 @@
 ![Task Framework flow chart](./images/TaskFrameworkLayers.png)
 
 ### Key Concepts
-* Task is the basic unit in Helix task framework. It can represents the a single runnable logics that user prefer to execute for each partition (distributed units).
+* Task is the smallest unit of work in the Helix Task Framework. It represents a single runnable logic that the user wants to execute for each partition (distributed unit).
 * Job defines one time operation across all the partitions. It contains multiple Tasks and configuration of tasks, such as how many tasks, timeout per task and so on.
-* Workflow is directed acyclic graph represents the relationships and running orders of Jobs. In addition, a workflow can also provide customized configuration, for example, Job dependencies.
+* Workflow is a directed acyclic graph that represents the relationships and running order of Jobs. In addition, a workflow can provide customized configuration, for example, Job dependencies.
 * JobQueue is another type of Workflow. Different from normal one, JobQueue is not terminated until user kill it. Also JobQueue can keep accepting newly coming jobs.
 
 ### Implement Your Task
@@ -71,7 +71,7 @@
 
 #### Share Content Across Tasks and Jobs
 
-Task framework also provides a feature that user can store the key-value data per task, job and workflow. The content stored at workflow layer can shared by different jobs belong to this workflow. Similarly content persisted at job layer can shared by different tasks nested in this job. Currently, user can extend the abstract class [UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java) and use two methods putUserContent and getUserContent. It will similar to hash map put and get method except a Scope.  The Scope will define which layer this key-value pair to be persisted.
+The Task Framework also lets users store key-value data per task, job, and workflow. Content stored at the workflow layer can be shared by the jobs belonging to that workflow. Similarly, content persisted at the job layer can be shared by the tasks nested in that job. Currently, users can extend the abstract class [UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java) and use its two methods putUserContent and getUserContent. They behave like HashMap's put and get methods, except for the additional Scope parameter. The Scope defines the layer at which the key-value pair is persisted.
 
 ```
 public class MyTask extends UserContentStore implements Task {
@@ -99,7 +99,7 @@
 
 #### Task Retry and Abort
 
-Helix provides retry logics to users. User can specify the how many times allowed to tolerant failure of tasks under a job. It is a method will be introduced in Following Job Section. Another choice offered to user that if user thinks a task is very critical and do not want to do the retry once it is failed, user can return a TaskResult stated above with FATAL_FAILED status. Then Helix will not do the retry for that task.
+Helix provides retry logic to users. Users can specify the number of task failures to tolerate under a job via a method introduced in the following Job section. Alternatively, if a task is critical and should not be retried once it fails, the task can return a TaskResult (as stated above) with the FATAL_FAILED status, and Helix will not retry that task.
 
 ```
 return new TaskResult(TaskResult.Status.FATAL_FAILED, "DO NOT WANT TO RETRY, ERROR MESSAGE");
@@ -194,7 +194,7 @@
 
 #### Add a Job
 
-WARNING: Job can only be added to WorkflowConfig.Builder. Once WorkflowConfig built, no job can be added! For creating a Job, please refering following section (Create a Job)
+WARNING: A Job can only be added to a WorkflowConfig.Builder. Once the WorkflowConfig is built, no more jobs can be added! To create a Job, please refer to the following section (Create a Job).
 
 ```
 myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
@@ -202,7 +202,7 @@
 
 #### Add a Job dependency
 
-Jobs can have dependencies. If one job2 depends job1, job2 will not be scheduled until job1 finished.
+Jobs can have dependencies. If job2 depends on job1, job2 will not be scheduled until job1 has finished.
 
 ```
 myWorkflowBuilder.addParentChildDependency(ParentJobName, ChildJobName);
@@ -224,7 +224,7 @@
 | _setExpiry(long v, TimeUnit unit)_ | Set the expiration time for this workflow. |
 | _setFailureThreshold(int failureThreshold)_ | Set the failure threshold for this workflow, once job failures reach this number, the workflow will be failed. |
 | _setWorkflowType(String workflowType)_ | Set the user defined workflowType for this workflow. |
-| _setTerminable(boolean isTerminable)_ | Set the whether this workflow is terminable or not. |
+| _setTerminable(boolean isTerminable)_ | Specify whether this workflow is terminable or not. |
 | _setCapacity(int capacity)_ | Set the number of jobs that workflow can hold before reject further jobs. Only used when workflow is not terminable. |
 | _setTargetState(TargetState v)_ | Set the final state of this workflow. |
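+
+For illustration, here is a minimal sketch that chains several of these options together (the no-arg WorkflowConfig.Builder constructor, the method chaining, and the "BackupFlow" type label are illustrative and may vary by Helix version; TimeUnit is java.util.concurrent.TimeUnit):
+
+```
+WorkflowConfig workflowConfig = new WorkflowConfig.Builder()
+    .setExpiry(2, TimeUnit.HOURS)   // clean up the workflow two hours after it finishes
+    .setFailureThreshold(1)         // fail the workflow once one job has failed
+    .setWorkflowType("BackupFlow")  // user-defined type label (example value)
+    .setTerminable(true)            // a terminable workflow ends once all its jobs complete
+    .build();
+```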
 
@@ -255,7 +255,7 @@
 
 ####Delete Job from Queue
 
-Helix allowed user to delete a job from existing queue. We offers delete API in TaskDriver to do this. Delete job from queue and this queue has to be stopped. Then user can resume the job once delete success.
+Helix allows users to delete a job from an existing queue. TaskDriver offers a delete API for this. The queue has to be stopped before a job can be deleted, and the user can resume the queue once the deletion succeeds.
 
 ```
 taskDriver.stop("QueueName");
diff --git a/website/1.0.2/src/site/markdown/tutorial_task_framework.md b/website/1.0.2/src/site/markdown/tutorial_task_framework.md
index d348544..4721276 100644
--- a/website/1.0.2/src/site/markdown/tutorial_task_framework.md
+++ b/website/1.0.2/src/site/markdown/tutorial_task_framework.md
@@ -29,9 +29,9 @@
 ![Task Framework flow chart](./images/TaskFrameworkLayers.png)
 
 ### Key Concepts
-* Task is the basic unit in Helix task framework. It can represents the a single runnable logics that user prefer to execute for each partition (distributed units).
+* Task is the smallest unit of work in the Helix Task Framework. It represents a single runnable logic that the user wants to execute for each partition (distributed unit).
 * Job defines one time operation across all the partitions. It contains multiple Tasks and configuration of tasks, such as how many tasks, timeout per task and so on.
-* Workflow is directed acyclic graph represents the relationships and running orders of Jobs. In addition, a workflow can also provide customized configuration, for example, Job dependencies.
+* Workflow is a directed acyclic graph that represents the relationships and running order of Jobs. In addition, a workflow can provide customized configuration, for example, Job dependencies.
 * JobQueue is another type of Workflow. Different from normal one, JobQueue is not terminated until user kill it. Also JobQueue can keep accepting newly coming jobs.
 
 ### Implement Your Task
@@ -71,8 +71,7 @@
 
 #### Share Content Across Tasks and Jobs
 
-Task framework also provides a feature that user can store the key-value data per task, job and workflow. The content stored at workflow layer can shared by different jobs belong to this workflow. Similarly content persisted at job layer can shared by different tasks nested in this job. Currently, user can extend the abstract class [UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java) and use two methods putUserContent and getUserContent. It will similar to hash map put and get method except a Scope.  The Scope will define which layer this key-value pair to be persisted.
-
+The Task Framework also lets users store key-value data per task, job, and workflow. Content stored at the workflow layer can be shared by the jobs belonging to that workflow. Similarly, content persisted at the job layer can be shared by the tasks nested in that job. Currently, users can extend the abstract class [UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java) and use its two methods putUserContent and getUserContent. They behave like HashMap's put and get methods, except for the additional Scope parameter. The Scope defines the layer at which the key-value pair is persisted.
 ```
 public class MyTask extends UserContentStore implements Task {
   @Override
@@ -99,7 +98,7 @@
 
 #### Task Retry and Abort
 
-Helix provides retry logics to users. User can specify the how many times allowed to tolerant failure of tasks under a job. It is a method will be introduced in Following Job Section. Another choice offered to user that if user thinks a task is very critical and do not want to do the retry once it is failed, user can return a TaskResult stated above with FATAL_FAILED status. Then Helix will not do the retry for that task.
+Helix provides retry logic to users. Users can specify the number of task failures to tolerate under a job via a method introduced in the following Job section. Alternatively, if a task is critical and should not be retried once it fails, the task can return a TaskResult (as stated above) with the FATAL_FAILED status, and Helix will not retry that task.
 
 ```
 return new TaskResult(TaskResult.Status.FATAL_FAILED, "DO NOT WANT TO RETRY, ERROR MESSAGE");
@@ -194,7 +193,7 @@
 
 #### Add a Job
 
-WARNING: Job can only be added to WorkflowConfig.Builder. Once WorkflowConfig built, no job can be added! For creating a Job, please refering following section (Create a Job)
+WARNING: A Job can only be added to a WorkflowConfig.Builder. Once the WorkflowConfig is built, no more jobs can be added! To create a Job, please refer to the following section (Create a Job).
 
 ```
 myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
@@ -202,7 +201,7 @@
 
 #### Add a Job dependency
 
-Jobs can have dependencies. If one job2 depends job1, job2 will not be scheduled until job1 finished.
+Jobs can have dependencies. If job2 depends on job1, job2 will not be scheduled until job1 has finished.
 
 ```
 myWorkflowBuilder.addParentChildDependency(ParentJobName, ChildJobName);
@@ -224,7 +223,7 @@
 | _setExpiry(long v, TimeUnit unit)_ | Set the expiration time for this workflow. |
 | _setFailureThreshold(int failureThreshold)_ | Set the failure threshold for this workflow, once job failures reach this number, the workflow will be failed. |
 | _setWorkflowType(String workflowType)_ | Set the user defined workflowType for this workflow. |
-| _setTerminable(boolean isTerminable)_ | Set the whether this workflow is terminable or not. |
+| _setTerminable(boolean isTerminable)_ | Specify whether this workflow is terminable or not. |
 | _setCapacity(int capacity)_ | Set the number of jobs that workflow can hold before reject further jobs. Only used when workflow is not terminable. |
 | _setTargetState(TargetState v)_ | Set the final state of this workflow. |
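+
+For illustration, here is a minimal sketch that chains several of these options together (the no-arg WorkflowConfig.Builder constructor, the method chaining, and the "BackupFlow" type label are illustrative and may vary by Helix version; TimeUnit is java.util.concurrent.TimeUnit):
+
+```
+WorkflowConfig workflowConfig = new WorkflowConfig.Builder()
+    .setExpiry(2, TimeUnit.HOURS)   // clean up the workflow two hours after it finishes
+    .setFailureThreshold(1)         // fail the workflow once one job has failed
+    .setWorkflowType("BackupFlow")  // user-defined type label (example value)
+    .setTerminable(true)            // a terminable workflow ends once all its jobs complete
+    .build();
+```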
 
@@ -255,7 +254,7 @@
 
 ####Delete Job from Queue
 
-Helix allowed user to delete a job from existing queue. We offers delete API in TaskDriver to do this. Delete job from queue and this queue has to be stopped. Then user can resume the job once delete success.
+Helix allows users to delete a job from an existing queue. TaskDriver offers a delete API for this. The queue has to be stopped before a job can be deleted, and the user can resume the queue once the deletion succeeds.
 
 ```
 taskDriver.stop("QueueName");