HDDS-4587. Merge remote-tracking branch 'upstream/master' into HDDS-3698 (#1822)

* HDDS-4587. Merge remote-tracking branch 'upstream/master' into HDDS-3698.

* HDDS-4587. Addressing CI failure.

* HDDS-4562. Old bucket needs to be accessible after the cluster was upgraded to the Quota version. (#1677)

Cherry-picked from master to fix an acceptance test failure in the upgrade test. Merging again from this point would have introduced 52 new conflicts.

* HDDS-4770. Upgrade Ratis Thirdparty to 0.6.0 (#1868)

Cherry-picked from master because 0.6.0-SNAPSHOT is no longer available in the repositories.

Co-authored-by: micah zhao <micahzhao@tencent.com>
Co-authored-by: Doroszlai, Attila <6454655+adoroszlai@users.noreply.github.com>
diff --git a/.github/workflows/post-commit.yml b/.github/workflows/post-commit.yml
index 44d14c0..497fbce 100644
--- a/.github/workflows/post-commit.yml
+++ b/.github/workflows/post-commit.yml
@@ -199,6 +199,7 @@
         run: |
           mkdir -p /mnt/ozone/hadoop-ozone/dist/target
           tar xzvf hadoop-ozone*.tar.gz -C /mnt/ozone/hadoop-ozone/dist/target
+          sudo chmod -R a+rwX /mnt/ozone/hadoop-ozone/dist/target
       - name: Install robotframework
         run: sudo pip install robotframework
       - name: Execute tests
@@ -274,19 +275,12 @@
         run: ./hadoop-ozone/dev-support/checks/coverage.sh
       - name: Upload coverage to Sonar
         uses: ./.github/buildenv
-        if: github.repository == 'apache/hadoop-ozone' && github.event_name != 'pull_request'
+        if: github.repository == 'apache/ozone' && github.event_name != 'pull_request'
         with:
           args: ./hadoop-ozone/dev-support/checks/sonar.sh
         env:
           SONAR_TOKEN: ${{ secrets.SONARCLOUD_TOKEN }}
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-      - name: Upload coverage to Codecov
-        uses: codecov/codecov-action@v1
-        if: github.repository == 'apache/hadoop-ozone' && github.event_name != 'pull_request'
-        with:
-          file: ./target/coverage/all.xml
-          name: codecov-umbrella
-          fail_ci_if_error: false
       - name: Archive build results
         uses: actions/upload-artifact@v2
         with:
diff --git a/HISTORY.md b/HISTORY.md
index 233471c..5069238 100644
--- a/HISTORY.md
+++ b/HISTORY.md
@@ -30,12 +30,12 @@
  * Ozone: provides Object Store semantics with the help of HDDS
  * CBlock: provides mountable volumes with the help of the HDDS layer (based on iScsi protocol)
 
-In the beginning of the year 2017 a new podling project was started inside [Apache Incubator](http://incubator.apache.org/): [Apache Ratis](https://ratis.apache.org/). Ratis is an embeddable RAFT protcol implementation it is which became the corner stone of consensus inside both Ozone and HDDS projects. (Started to [be used](https://issues.apache.org/jira/browse/HDFS-11519) by Ozone in March of 2017) 
+In the beginning of the year 2017 a new podling project was started inside [Apache Incubator](http://incubator.apache.org/): [Apache Ratis](https://ratis.apache.org/). Ratis is an embeddable RAFT protocol implementation which became the cornerstone of consensus inside both the Ozone and HDDS projects. (Started to [be used](https://issues.apache.org/jira/browse/HDFS-11519) by Ozone in March of 2017)
 
 In the October of 2017 a [discussion](https://lists.apache.org/thread.html/3b5b65ce428f88299e6cb4c5d745ec65917490be9e417d361cc08d7e@%3Chdfs-dev.hadoop.apache.org%3E) has been started on hdfs-dev mailing list to merge the existing functionality to the Apache Hadoop trunk. After a long debate Owen O'Malley [suggested a consensus](https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E) to merge it to the trunk but use separated release cycle:
 
  > * HDSL become a subproject of Hadoop.
- > * HDSL will release separately from Hadoop. Hadoop releases will notcontain HDSL and vice versa.
+ > * HDSL will release separately from Hadoop. Hadoop releases will not contain HDSL and vice versa.
  > * HDSL will get its own jira instance so that the release tags stay separate.
  > * On trunk (as opposed to release branches) HDSL will be a separate module in Hadoop's source tree. This will enable the HDSL to work on their trunk and the Hadoop trunk without making releases for every change.
  > * Hadoop's trunk will only build HDSL if a non-default profile is enabled. When Hadoop creates a release branch, the RM will delete the HDSL module from the branch.
diff --git a/README.md b/README.md
index b4a40bf..2df0f8b 100644
--- a/README.md
+++ b/README.md
@@ -27,6 +27,7 @@
  * Chat: You can find the #ozone channel on the official ASF slack. Invite link is [here](http://s.apache.org/slack-invite).
  * There are Open [Weekly calls](https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Community+Calls) where you can ask anything about Ozone.
      * Past meeting notes are also available from the wiki.
+ * Reporting security issues: Please consult [SECURITY.md](./SECURITY.md) for how to report security vulnerabilities and issues.
 
 ## Download
 
diff --git a/SECURITY.md b/SECURITY.md
new file mode 100644
index 0000000..8d8a42b
--- /dev/null
+++ b/SECURITY.md
@@ -0,0 +1,23 @@
+# Security Policy
+
+## Supported Versions
+
+The first stable release of Apache Ozone is 1.0; the earlier alpha and beta releases are not supported by the community.
+
+| Version       | Supported          |
+| ------------- | ------------------ |
+| 0.3.0 (alpha) | :x:                |
+| 0.4.0 (alpha) | :x:                |
+| 0.4.1 (alpha) | :x:                |
+| 0.5.0 (beta)  | :x:                |
+| 1.0           | :white_check_mark: |
+
+## Reporting a Vulnerability
+
+To report any security issues or vulnerabilities, please send a mail to security@ozone.apache.org, so that they can be investigated and fixed before they are publicly disclosed.
+
+This email address is a private mailing list for the discussion of potential security vulnerabilities.
+
+This mailing list is **NOT** for end-user questions and discussion on security. Please use the dev@ozone.apache.org list for such issues.
+
+In order to post to the list, it is **NOT** necessary to first subscribe to it.
diff --git a/hadoop-hdds/client/pom.xml b/hadoop-hdds/client/pom.xml
index e1b51e8..e3b0824 100644
--- a/hadoop-hdds/client/pom.xml
+++ b/hadoop-hdds/client/pom.xml
@@ -57,5 +57,10 @@
       <scope>test</scope>
     </dependency>
 
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-log4j12</artifactId>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
 </project>
diff --git a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/OzoneClientConfig.java b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/OzoneClientConfig.java
index b3c774a..c2e0148 100644
--- a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/OzoneClientConfig.java
+++ b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/OzoneClientConfig.java
@@ -54,6 +54,18 @@
       tags = ConfigTag.CLIENT)
   private int streamBufferSize = 4 * 1024 * 1024;
 
+  @Config(key = "stream.buffer.increment",
+      defaultValue = "0B",
+      type = ConfigType.SIZE,
+      description = "Buffer (defined by ozone.client.stream.buffer.size) "
+          + "will be incremented with this steps. If zero, the full buffer "
+          + "will "
+          + "be created at once. Setting it to a variable between 0 and "
+          + "ozone.client.stream.buffer.size can reduce the memory usage for "
+          + "very small keys, but has a performance overhead.",
+      tags = ConfigTag.CLIENT)
+  private int bufferIncrement = 0;
+
   @Config(key = "stream.buffer.flush.delay",
       defaultValue = "true",
       description = "Default true, when call flush() and determine whether "
@@ -118,6 +130,9 @@
     Preconditions.checkState(streamBufferFlushSize > 0);
     Preconditions.checkState(streamBufferMaxSize > 0);
 
+    Preconditions.checkArgument(bufferIncrement < streamBufferSize,
+        "Buffer increment should be smaller than the size of the stream "
+            + "buffer");
     Preconditions.checkState(streamBufferMaxSize % streamBufferFlushSize == 0,
         "expected max. buffer size (%s) to be a multiple of flush size (%s)",
         streamBufferMaxSize, streamBufferFlushSize);
@@ -209,4 +224,7 @@
     this.checksumVerify = checksumVerify;
   }
 
+  public int getBufferIncrement() {
+    return bufferIncrement;
+  }
 }
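
A minimal usage sketch for the new setting, assuming the fully qualified key is ozone.client.stream.buffer.increment (the @Config key above is relative to the ozone.client prefix used by OzoneClientConfig); the key name should be verified against ozone-default.xml.

    import org.apache.hadoop.hdds.conf.OzoneConfiguration;

    public final class BufferIncrementConfigSketch {
      public static void main(String[] args) {
        OzoneConfiguration conf = new OzoneConfiguration();
        // Keep the 4 MB stream buffer, but let it grow in 1 MB steps.
        // The increment must stay below ozone.client.stream.buffer.size,
        // otherwise the Preconditions check in OzoneClientConfig rejects it.
        conf.set("ozone.client.stream.buffer.size", "4MB");
        conf.set("ozone.client.stream.buffer.increment", "1MB");  // assumed key name
        System.out.println(conf.get("ozone.client.stream.buffer.increment"));
      }
    }
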
diff --git a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
index 6e99bf3..49f0cca 100644
--- a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
+++ b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
@@ -33,6 +33,7 @@
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicReference;
+import java.util.function.Supplier;
 import java.util.stream.Collectors;
 
 import org.apache.hadoop.hdds.HddsUtils;
@@ -51,7 +52,6 @@
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
-import com.google.common.base.Supplier;
 import org.apache.ratis.client.RaftClient;
 import org.apache.ratis.grpc.GrpcTlsConfig;
 import org.apache.ratis.proto.RaftProtos;
diff --git a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
index 3578bd6..a5f3091 100644
--- a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
+++ b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
@@ -24,8 +24,10 @@
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
+import java.util.concurrent.TimeUnit;
 import java.util.function.Function;
 
+import org.apache.hadoop.fs.CanUnbuffer;
 import org.apache.hadoop.fs.Seekable;
 import org.apache.hadoop.hdds.client.BlockID;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
@@ -33,11 +35,13 @@
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.GetBlockResponseProto;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.scm.XceiverClientFactory;
-import org.apache.hadoop.hdds.scm.XceiverClientManager;
 import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.client.HddsClientUtils;
 import org.apache.hadoop.hdds.scm.container.ContainerNotFoundException;
+import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.security.token.OzoneBlockTokenIdentifier;
+import org.apache.hadoop.io.retry.RetryPolicy;
 import org.apache.hadoop.security.token.Token;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -50,7 +54,8 @@
  * This class encapsulates all state management for iterating
  * through the sequence of chunks through {@link ChunkInputStream}.
  */
-public class BlockInputStream extends InputStream implements Seekable {
+public class BlockInputStream extends InputStream
+    implements Seekable, CanUnbuffer {
 
   private static final Logger LOG =
       LoggerFactory.getLogger(BlockInputStream.class);
@@ -65,6 +70,9 @@
   private XceiverClientFactory xceiverClientFactory;
   private XceiverClientSpi xceiverClient;
   private boolean initialized = false;
+  private final RetryPolicy retryPolicy =
+      HddsClientUtils.createRetryPolicy(3, TimeUnit.SECONDS.toMillis(1));
+  private int retries;
 
   // List of ChunkInputStreams, one for each chunk in the block
   private List<ChunkInputStream> chunkStreams;
@@ -95,25 +103,25 @@
   // can be reset if a new position is seeked.
   private int chunkIndexOfPrevPosition;
 
-  private Function<BlockID, Pipeline> refreshPipelineFunction;
+  private final Function<BlockID, Pipeline> refreshPipelineFunction;
 
   public BlockInputStream(BlockID blockId, long blockLen, Pipeline pipeline,
       Token<OzoneBlockTokenIdentifier> token, boolean verifyChecksum,
-      XceiverClientFactory xceiverClientFctry,
+      XceiverClientFactory xceiverClientFactory,
       Function<BlockID, Pipeline> refreshPipelineFunction) {
     this.blockID = blockId;
     this.length = blockLen;
     this.pipeline = pipeline;
     this.token = token;
     this.verifyChecksum = verifyChecksum;
-    this.xceiverClientFactory = xceiverClientFctry;
+    this.xceiverClientFactory = xceiverClientFactory;
     this.refreshPipelineFunction = refreshPipelineFunction;
   }
 
   public BlockInputStream(BlockID blockId, long blockLen, Pipeline pipeline,
                           Token<OzoneBlockTokenIdentifier> token,
                           boolean verifyChecksum,
-                          XceiverClientManager xceiverClientFactory
+                          XceiverClientFactory xceiverClientFactory
   ) {
     this(blockId, blockLen, pipeline, token, verifyChecksum,
         xceiverClientFactory, null);
@@ -129,22 +137,12 @@
       return;
     }
 
-    List<ChunkInfo> chunks = null;
+    List<ChunkInfo> chunks;
     try {
       chunks = getChunkInfos();
     } catch (ContainerNotFoundException ioEx) {
-      LOG.error("Unable to read block information from pipeline.");
-      if (refreshPipelineFunction != null) {
-        LOG.debug("Re-fetching pipeline for block {}", blockID);
-        Pipeline newPipeline = refreshPipelineFunction.apply(blockID);
-        if (newPipeline == null || newPipeline.equals(pipeline)) {
-          throw ioEx;
-        } else {
-          LOG.debug("New pipeline got for block {}", blockID);
-          this.pipeline = newPipeline;
-          chunks = getChunkInfos();
-        }
-      }
+      refreshPipeline(ioEx);
+      chunks = getChunkInfos();
     }
 
     if (chunks != null && !chunks.isEmpty()) {
@@ -171,6 +169,24 @@
     }
   }
 
+  private void refreshPipeline(IOException cause) throws IOException {
+    LOG.info("Unable to read information for block {} from pipeline {}: {}",
+        blockID, pipeline.getId(), cause.getMessage());
+    if (refreshPipelineFunction != null) {
+      LOG.debug("Re-fetching pipeline for block {}", blockID);
+      Pipeline newPipeline = refreshPipelineFunction.apply(blockID);
+      if (newPipeline == null || newPipeline.sameDatanodes(pipeline)) {
+        LOG.warn("No new pipeline for block {}", blockID);
+        throw cause;
+      } else {
+        LOG.debug("New pipeline got for block {}", blockID);
+        this.pipeline = newPipeline;
+      }
+    } else {
+      throw cause;
+    }
+  }
+
   /**
    * Send RPC call to get the block info from the container.
    * @return List of chunks in this block.
@@ -182,7 +198,7 @@
       pipeline = Pipeline.newBuilder(pipeline)
           .setType(HddsProtos.ReplicationType.STAND_ALONE).build();
     }
-    xceiverClient =  xceiverClientFactory.acquireClientForReadData(pipeline);
+    acquireClient();
     boolean success = false;
     List<ChunkInfo> chunks;
     try {
@@ -207,17 +223,25 @@
     return chunks;
   }
 
+  protected void acquireClient() throws IOException {
+    xceiverClient = xceiverClientFactory.acquireClientForReadData(pipeline);
+  }
+
   /**
    * Append another ChunkInputStream to the end of the list. Note that the
    * ChunkInputStream is only created here. The chunk will be read from the
    * Datanode only when a read operation is performed on for that chunk.
    */
   protected synchronized void addStream(ChunkInfo chunkInfo) {
-    chunkStreams.add(new ChunkInputStream(chunkInfo, blockID,
-        xceiverClient, verifyChecksum, token));
+    chunkStreams.add(createChunkInputStream(chunkInfo));
   }
 
-  public synchronized long getRemaining() throws IOException {
+  protected ChunkInputStream createChunkInputStream(ChunkInfo chunkInfo) {
+    return new ChunkInputStream(chunkInfo, blockID,
+        xceiverClientFactory, () -> pipeline, verifyChecksum, token);
+  }
+
+  public synchronized long getRemaining() {
     return length - getPos();
   }
 
@@ -266,7 +290,18 @@
       // Get the current chunkStream and read data from it
       ChunkInputStream current = chunkStreams.get(chunkIndex);
       int numBytesToRead = Math.min(len, (int)current.getRemaining());
-      int numBytesRead = current.read(b, off, numBytesToRead);
+      int numBytesRead;
+      try {
+        numBytesRead = current.read(b, off, numBytesToRead);
+        retries = 0; // reset retries after successful read
+      } catch (StorageContainerException e) {
+        if (shouldRetryRead(e)) {
+          handleReadError(e);
+          continue;
+        } else {
+          throw e;
+        }
+      }
 
       if (numBytesRead != numBytesToRead) {
         // This implies that there is either data loss or corruption in the
@@ -356,7 +391,7 @@
   }
 
   @Override
-  public synchronized long getPos() throws IOException {
+  public synchronized long getPos() {
     if (length == 0) {
       return 0;
     }
@@ -376,9 +411,13 @@
 
   @Override
   public synchronized void close() {
+    releaseClient();
+    xceiverClientFactory = null;
+  }
+
+  private void releaseClient() {
     if (xceiverClientFactory != null && xceiverClient != null) {
       xceiverClientFactory.releaseClient(xceiverClient, false);
-      xceiverClientFactory = null;
       xceiverClient = null;
     }
   }
@@ -393,7 +432,7 @@
    * @throws IOException if stream is closed
    */
   protected synchronized void checkOpen() throws IOException {
-    if (xceiverClient == null) {
+    if (xceiverClientFactory == null) {
       throw new IOException("BlockInputStream has been closed.");
     }
   }
@@ -416,8 +455,44 @@
     return blockPosition;
   }
 
-  @VisibleForTesting
-  synchronized List<ChunkInputStream> getChunkStreams() {
-    return chunkStreams;
+  @Override
+  public void unbuffer() {
+    storePosition();
+    releaseClient();
+
+    final List<ChunkInputStream> inputStreams = this.chunkStreams;
+    if (inputStreams != null) {
+      for (ChunkInputStream is : inputStreams) {
+        is.unbuffer();
+      }
+    }
+  }
+
+  private synchronized void storePosition() {
+    blockPosition = getPos();
+  }
+
+  private boolean shouldRetryRead(IOException cause) throws IOException {
+    RetryPolicy.RetryAction retryAction;
+    try {
+      retryAction = retryPolicy.shouldRetry(cause, ++retries, 0, true);
+    } catch (IOException e) {
+      throw e;
+    } catch (Exception e) {
+      throw new IOException(e);
+    }
+    return retryAction.action == RetryPolicy.RetryAction.RetryDecision.RETRY;
+  }
+
+  private void handleReadError(IOException cause) throws IOException {
+    releaseClient();
+    final List<ChunkInputStream> inputStreams = this.chunkStreams;
+    if (inputStreams != null) {
+      for (ChunkInputStream is : inputStreams) {
+        is.releaseClient();
+      }
+    }
+
+    refreshPipeline(cause);
   }
 }
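
To make the new control flow easier to follow, here is a small self-contained sketch of the pattern introduced above: retry a failed read after refreshing the pipeline, and give up when the refreshed pipeline has the same datanodes or the retry budget is exhausted. ContainerGoneException, Pipeline, readOnce() and lookupPipeline() are illustrative stand-ins, not Ozone types; the real code drives the loop with HddsClientUtils.createRetryPolicy and refreshPipelineFunction.

    import java.io.IOException;

    final class RetryingReadSketch {

      static final class ContainerGoneException extends IOException {
        ContainerGoneException(String msg) { super(msg); }
      }

      static final class Pipeline {
        private final String id;
        Pipeline(String id) { this.id = id; }
        boolean sameDatanodes(Pipeline other) { return id.equals(other.id); }
      }

      private static final int MAX_RETRIES = 3;

      private Pipeline pipeline = new Pipeline("p1");
      private int retries;

      int read(byte[] buf) throws IOException {
        while (true) {
          try {
            int n = readOnce(buf);   // may fail if the container moved
            retries = 0;             // reset after a successful read
            return n;
          } catch (ContainerGoneException e) {
            if (++retries > MAX_RETRIES) {
              throw e;               // retry budget exhausted
            }
            refreshPipeline(e);      // look up a replacement pipeline
          }
        }
      }

      private void refreshPipeline(IOException cause) throws IOException {
        Pipeline fresh = lookupPipeline();
        if (fresh == null || fresh.sameDatanodes(pipeline)) {
          throw cause;               // nothing new to try, surface the error
        }
        pipeline = fresh;
      }

      // Placeholders for the datanode read and the pipeline lookup.
      private int readOnce(byte[] buf) throws ContainerGoneException {
        return buf.length;
      }

      private Pipeline lookupPipeline() {
        return new Pipeline("p2");
      }

      public static void main(String[] args) throws IOException {
        System.out.println(new RetryingReadSketch().read(new byte[16])); // 16
      }
    }
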
diff --git a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
index e29bbe3..272120b 100644
--- a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
+++ b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
@@ -46,14 +46,13 @@
 import org.apache.hadoop.ozone.common.ChecksumData;
 import org.apache.hadoop.ozone.common.ChunkBuffer;
 import org.apache.hadoop.ozone.common.OzoneChecksumException;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.TokenIdentifier;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 import static org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.putBlockAsync;
 import static org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync;
-
-import org.apache.hadoop.security.token.Token;
-import org.apache.hadoop.security.token.TokenIdentifier;
 import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -281,7 +280,7 @@
 
   private void allocateNewBufferIfNeeded() {
     if (currentBufferRemaining == 0) {
-      currentBuffer = bufferPool.allocateBuffer(config.getBytesPerChecksum());
+      currentBuffer = bufferPool.allocateBuffer(config.getBufferIncrement());
       currentBufferRemaining = currentBuffer.remaining();
     }
   }
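
The effect of passing getBufferIncrement() instead of getBytesPerChecksum() is that a zero increment keeps the old one-shot allocation, while a positive value lets the write buffer grow step by step. A rough illustration of that intent (BufferPoolSketch is a stand-in, not the real Ozone BufferPool):

    import java.nio.ByteBuffer;

    final class BufferPoolSketch {
      private final int bufferSize;

      BufferPoolSketch(int bufferSize) {
        this.bufferSize = bufferSize;
      }

      ByteBuffer allocateBuffer(int increment) {
        // 0 (the default) keeps the old behaviour: one full-size allocation.
        int capacity = increment == 0 ? bufferSize : Math.min(increment, bufferSize);
        return ByteBuffer.allocate(capacity);
      }

      public static void main(String[] args) {
        BufferPoolSketch pool = new BufferPoolSketch(4 * 1024 * 1024);
        System.out.println(pool.allocateBuffer(0).capacity());            // 4194304
        System.out.println(pool.allocateBuffer(1024 * 1024).capacity());  // 1048576
      }
    }
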
diff --git a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
index cfb3a21..9c03453 100644
--- a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
+++ b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
@@ -20,14 +20,17 @@
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.CanUnbuffer;
 import org.apache.hadoop.fs.Seekable;
 import org.apache.hadoop.hdds.client.BlockID;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ReadChunkResponseProto;
+import org.apache.hadoop.hdds.scm.XceiverClientFactory;
 import org.apache.hadoop.hdds.scm.XceiverClientSpi;
 import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.ozone.common.Checksum;
 import org.apache.hadoop.ozone.common.ChecksumData;
 import org.apache.hadoop.ozone.common.OzoneChecksumException;
@@ -40,18 +43,22 @@
 import java.io.InputStream;
 import java.nio.ByteBuffer;
 import java.util.List;
+import java.util.function.Supplier;
 
 /**
  * An {@link InputStream} called from BlockInputStream to read a chunk from the
  * container. Each chunk may contain multiple underlying {@link ByteBuffer}
  * instances.
  */
-public class ChunkInputStream extends InputStream implements Seekable {
+public class ChunkInputStream extends InputStream
+    implements Seekable, CanUnbuffer {
 
   private ChunkInfo chunkInfo;
   private final long length;
   private final BlockID blockID;
+  private final XceiverClientFactory xceiverClientFactory;
   private XceiverClientSpi xceiverClient;
+  private final Supplier<Pipeline> pipelineSupplier;
   private boolean verifyChecksum;
   private boolean allocated = false;
   // Buffer to store the chunk data read from the DN container
@@ -69,9 +76,8 @@
 
   // Position of the ChunkInputStream is maintained by this variable (if a
   // seek is performed. This position is w.r.t to the chunk only and not the
-  // block or key. This variable is set only if either the buffers are not
-  // yet allocated or the if the allocated buffers do not cover the seeked
-  // position. Once the chunk is read, this variable is reset.
+  // block or key. This variable is also set before attempting a read to enable
+  // retry.  Once the chunk is read, this variable is reset.
   private long chunkPosition = -1;
 
   private final Token<? extends TokenIdentifier> token;
@@ -79,17 +85,19 @@
   private static final int EOF = -1;
 
   ChunkInputStream(ChunkInfo chunkInfo, BlockID blockId,
-      XceiverClientSpi xceiverClient, boolean verifyChecksum,
-      Token<? extends TokenIdentifier> token) {
+      XceiverClientFactory xceiverClientFactory,
+      Supplier<Pipeline> pipelineSupplier,
+      boolean verifyChecksum, Token<? extends TokenIdentifier> token) {
     this.chunkInfo = chunkInfo;
     this.length = chunkInfo.getLen();
     this.blockID = blockId;
-    this.xceiverClient = xceiverClient;
+    this.xceiverClientFactory = xceiverClientFactory;
+    this.pipelineSupplier = pipelineSupplier;
     this.verifyChecksum = verifyChecksum;
     this.token = token;
   }
 
-  public synchronized long getRemaining() throws IOException {
+  public synchronized long getRemaining() {
     return length - getPos();
   }
 
@@ -98,7 +106,7 @@
    */
   @Override
   public synchronized int read() throws IOException {
-    checkOpen();
+    acquireClient();
     int available = prepareRead(1);
     int dataout = EOF;
 
@@ -143,7 +151,7 @@
     if (len == 0) {
       return 0;
     }
-    checkOpen();
+    acquireClient();
     int total = 0;
     while (len > 0) {
       int available = prepareRead(len);
@@ -196,7 +204,7 @@
   }
 
   @Override
-  public synchronized long getPos() throws IOException {
+  public synchronized long getPos() {
     if (chunkPosition >= 0) {
       return chunkPosition;
     }
@@ -219,19 +227,23 @@
 
   @Override
   public synchronized void close() {
-    if (xceiverClient != null) {
+    releaseClient();
+  }
+
+  protected synchronized void releaseClient() {
+    if (xceiverClientFactory != null && xceiverClient != null) {
+      xceiverClientFactory.releaseClient(xceiverClient, false);
       xceiverClient = null;
     }
   }
 
   /**
-   * Checks if the stream is open.  If not, throw an exception.
-   *
-   * @throws IOException if stream is closed
+   * Acquire new client if previous one was released.
    */
-  protected synchronized void checkOpen() throws IOException {
-    if (xceiverClient == null) {
-      throw new IOException("BlockInputStream has been closed.");
+  protected synchronized void acquireClient() throws IOException {
+    if (xceiverClientFactory != null && xceiverClient == null) {
+      xceiverClient = xceiverClientFactory.acquireClientForReadData(
+          pipelineSupplier.get());
     }
   }
 
@@ -292,6 +304,11 @@
       startByteIndex = bufferOffset + bufferLength;
     }
 
+    // bufferOffset and bufferLength are updated below, but if read fails
+    // and is retried, we need the previous position.  Position is reset after
+    // successful read in adjustBufferPosition()
+    storePosition();
+
     if (verifyChecksum) {
       // Update the bufferOffset and bufferLength as per the checksum
       // boundary requirement.
@@ -437,7 +454,8 @@
   /**
    * Check if the buffers have been allocated data and false otherwise.
    */
-  private boolean buffersAllocated() {
+  @VisibleForTesting
+  protected boolean buffersAllocated() {
     return buffers != null && !buffers.isEmpty();
   }
 
@@ -538,6 +556,10 @@
     this.chunkPosition = -1;
   }
 
+  private void storePosition() {
+    chunkPosition = getPos();
+  }
+
   String getChunkName() {
     return chunkInfo.getChunkName();
   }
@@ -550,4 +572,11 @@
   protected long getChunkPosition() {
     return chunkPosition;
   }
+
+  @Override
+  public synchronized void unbuffer() {
+    storePosition();
+    releaseBuffers();
+    releaseClient();
+  }
 }
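
From the caller's side, the new CanUnbuffer support means a long-lived input stream can drop its buffers and datanode client between reads and transparently re-acquire them on the next read. A hedged sketch through the standard Hadoop FileSystem API (the o3fs URI and path are placeholders):

    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public final class UnbufferSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Placeholder URI and path; adjust to an actual volume/bucket/key.
        try (FileSystem fs = FileSystem.get(URI.create("o3fs://bucket.volume/"), conf);
             FSDataInputStream in = fs.open(new Path("/key"))) {
          byte[] buf = new byte[128];
          in.read(buf, 0, buf.length);   // first read acquires a datanode client
          in.unbuffer();                 // drop buffers and the client, keep position
          in.read(buf, 0, buf.length);   // next read re-acquires a client lazily
        }
      }
    }
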
diff --git a/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/DummyBlockInputStream.java b/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/DummyBlockInputStream.java
index 5db722a..1c7968b 100644
--- a/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/DummyBlockInputStream.java
+++ b/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/DummyBlockInputStream.java
@@ -24,7 +24,7 @@
 
 import org.apache.hadoop.hdds.client.BlockID;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
-import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.XceiverClientFactory;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.security.token.OzoneBlockTokenIdentifier;
 import org.apache.hadoop.security.token.Token;
@@ -34,9 +34,9 @@
  */
 class DummyBlockInputStream extends BlockInputStream {
 
-  private List<ChunkInfo> chunks;
+  private final List<ChunkInfo> chunks;
 
-  private Map<String, byte[]> chunkDataMap;
+  private final Map<String, byte[]> chunkDataMap;
 
   @SuppressWarnings("parameternumber")
   DummyBlockInputStream(
@@ -45,23 +45,7 @@
       Pipeline pipeline,
       Token<OzoneBlockTokenIdentifier> token,
       boolean verifyChecksum,
-      XceiverClientManager xceiverClientManager,
-      List<ChunkInfo> chunkList,
-      Map<String, byte[]> chunkMap) {
-    super(blockId, blockLen, pipeline, token, verifyChecksum,
-        xceiverClientManager);
-    this.chunks = chunkList;
-    this.chunkDataMap = chunkMap;
-  }
-
-  @SuppressWarnings("parameternumber")
-  DummyBlockInputStream(
-      BlockID blockId,
-      long blockLen,
-      Pipeline pipeline,
-      Token<OzoneBlockTokenIdentifier> token,
-      boolean verifyChecksum,
-      XceiverClientManager xceiverClientManager,
+      XceiverClientFactory xceiverClientManager,
       Function<BlockID, Pipeline> refreshFunction,
       List<ChunkInfo> chunkList,
       Map<String, byte[]> chunks) {
@@ -78,11 +62,10 @@
   }
 
   @Override
-  protected void addStream(ChunkInfo chunkInfo) {
-    TestChunkInputStream testChunkInputStream = new TestChunkInputStream();
-    getChunkStreams().add(new DummyChunkInputStream(testChunkInputStream,
+  protected ChunkInputStream createChunkInputStream(ChunkInfo chunkInfo) {
+    return new DummyChunkInputStream(
         chunkInfo, null, null, false,
-        chunkDataMap.get(chunkInfo.getChunkName()).clone()));
+        chunkDataMap.get(chunkInfo.getChunkName()).clone(), null);
   }
 
   @Override
diff --git a/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/DummyBlockInputStreamWithRetry.java b/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/DummyBlockInputStreamWithRetry.java
index 1686ed4..51ba2c6 100644
--- a/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/DummyBlockInputStreamWithRetry.java
+++ b/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/DummyBlockInputStreamWithRetry.java
@@ -26,7 +26,7 @@
 import org.apache.hadoop.hdds.client.BlockID;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
-import org.apache.hadoop.hdds.scm.XceiverClientManager;
+import org.apache.hadoop.hdds.scm.XceiverClientFactory;
 import org.apache.hadoop.hdds.scm.container.ContainerNotFoundException;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
@@ -49,7 +49,7 @@
       Pipeline pipeline,
       Token<OzoneBlockTokenIdentifier> token,
       boolean verifyChecksum,
-      XceiverClientManager xceiverClientManager,
+      XceiverClientFactory xceiverClientManager,
       List<ChunkInfo> chunkList,
       Map<String, byte[]> chunkMap,
       AtomicBoolean isRerfreshed) {
diff --git a/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/DummyChunkInputStream.java b/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/DummyChunkInputStream.java
index e654d11..15f7eda 100644
--- a/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/DummyChunkInputStream.java
+++ b/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/DummyChunkInputStream.java
@@ -22,8 +22,9 @@
 
 import org.apache.hadoop.hdds.client.BlockID;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
-import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.XceiverClientFactory;
 
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
 
 /**
@@ -31,18 +32,18 @@
  */
 public class DummyChunkInputStream extends ChunkInputStream {
 
-  private byte[] chunkData;
+  private final byte[] chunkData;
 
   // Stores the read chunk data in each readChunk call
-  private List<ByteString> readByteBuffers = new ArrayList<>();
+  private final List<ByteString> readByteBuffers = new ArrayList<>();
 
-  public DummyChunkInputStream(TestChunkInputStream testChunkInputStream,
-      ChunkInfo chunkInfo,
+  public DummyChunkInputStream(ChunkInfo chunkInfo,
       BlockID blockId,
-      XceiverClientSpi xceiverClient,
+      XceiverClientFactory xceiverClientFactory,
       boolean verifyChecksum,
-      byte[] data) {
-    super(chunkInfo, blockId, xceiverClient, verifyChecksum, null);
+      byte[] data, Pipeline pipeline) {
+    super(chunkInfo, blockId, xceiverClientFactory, () -> pipeline,
+        verifyChecksum, null);
     this.chunkData = data;
   }
 
@@ -56,10 +57,15 @@
   }
 
   @Override
-  protected void checkOpen() {
+  protected void acquireClient() {
     // No action needed
   }
 
+  @Override
+  protected void releaseClient() {
+    // no-op
+  }
+
   public List<ByteString> getReadByteBuffers() {
     return readByteBuffers;
   }
diff --git a/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/TestBlockInputStream.java b/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/TestBlockInputStream.java
index 3f5e12a..940caa7 100644
--- a/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/TestBlockInputStream.java
+++ b/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/TestBlockInputStream.java
@@ -21,27 +21,50 @@
 import com.google.common.primitives.Bytes;
 import org.apache.hadoop.hdds.client.BlockID;
 import org.apache.hadoop.hdds.client.ContainerBlockID;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumType;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import org.apache.hadoop.hdds.scm.XceiverClientFactory;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.hdds.scm.pipeline.MockPipeline;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.ozone.common.Checksum;
 
+import org.apache.hadoop.ozone.common.OzoneChecksumException;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.mockito.Mock;
+import org.mockito.junit.MockitoJUnitRunner;
 
 import java.io.EOFException;
+import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
 import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.function.Function;
 
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result.CONTAINER_NOT_FOUND;
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result.CONTAINER_UNHEALTHY;
 import static org.apache.hadoop.hdds.scm.storage.TestChunkInputStream.generateRandomData;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.anyInt;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
 
 /**
  * Tests for {@link BlockInputStream}'s functionality.
  */
+@RunWith(MockitoJUnitRunner.class)
 public class TestBlockInputStream {
 
   private static final int CHUNK_SIZE = 100;
@@ -52,7 +75,9 @@
   private int blockSize;
   private List<ChunkInfo> chunks;
   private Map<String, byte[]> chunkDataMap;
-  private AtomicBoolean isRefreshed = new AtomicBoolean();
+
+  @Mock
+  private Function<BlockID, Pipeline> refreshPipeline;
 
   @Before
   public void setup() throws Exception {
@@ -61,7 +86,7 @@
     createChunkList(5);
 
     blockStream = new DummyBlockInputStream(blockID, blockSize, null, null,
-        false, null, chunks, chunkDataMap);
+        false, null, refreshPipeline, chunks, chunkDataMap);
   }
 
   /**
@@ -199,9 +224,11 @@
   @Test
   public void testRefreshPipelineFunction() throws Exception {
     BlockID blockID = new BlockID(new ContainerBlockID(1, 1));
+    AtomicBoolean isRefreshed = new AtomicBoolean();
     createChunkList(5);
     BlockInputStream blockInputStreamWithRetry =
-        new DummyBlockInputStreamWithRetry(blockID, blockSize, null, null,
+        new DummyBlockInputStreamWithRetry(blockID, blockSize,
+            MockPipeline.createSingleNodePipeline(), null,
             false, null, chunks, chunkDataMap, isRefreshed);
 
     Assert.assertFalse(isRefreshed.get());
@@ -210,4 +237,160 @@
     blockInputStreamWithRetry.read(b, 0, 200);
     Assert.assertTrue(isRefreshed.get());
   }
+
+  @Test
+  public void testRefreshOnReadFailure() throws Exception {
+    // GIVEN
+    BlockID blockID = new BlockID(new ContainerBlockID(1, 1));
+    Pipeline pipeline = MockPipeline.createSingleNodePipeline();
+    Pipeline newPipeline = MockPipeline.createSingleNodePipeline();
+
+    final int len = 200;
+    final ChunkInputStream stream = mock(ChunkInputStream.class);
+    when(stream.read(any(), anyInt(), anyInt()))
+        .thenThrow(new StorageContainerException("test", CONTAINER_NOT_FOUND))
+        .thenReturn(len);
+    when(stream.getRemaining())
+        .thenReturn((long) len);
+
+    when(refreshPipeline.apply(blockID))
+        .thenReturn(newPipeline);
+
+    BlockInputStream subject = new DummyBlockInputStream(blockID, blockSize,
+        pipeline, null, false, null, refreshPipeline, chunks, null) {
+      @Override
+      protected ChunkInputStream createChunkInputStream(ChunkInfo chunkInfo) {
+        return stream;
+      }
+    };
+    subject.initialize();
+
+    // WHEN
+    byte[] b = new byte[len];
+    int bytesRead = subject.read(b, 0, len);
+
+    // THEN
+    Assert.assertEquals(len, bytesRead);
+    verify(refreshPipeline).apply(blockID);
+  }
+
+  @Test
+  public void testRefreshExitsIfPipelineHasSameNodes() throws Exception {
+    // GIVEN
+    BlockID blockID = new BlockID(new ContainerBlockID(1, 1));
+    Pipeline pipeline = MockPipeline.createSingleNodePipeline();
+
+    final int len = 200;
+    final ChunkInputStream stream = mock(ChunkInputStream.class);
+    when(stream.read(any(), anyInt(), anyInt()))
+        .thenThrow(new StorageContainerException("test", CONTAINER_UNHEALTHY));
+    when(stream.getRemaining())
+        .thenReturn((long) len);
+
+    when(refreshPipeline.apply(blockID))
+        .thenAnswer(invocation -> samePipelineWithNewId(pipeline));
+
+    BlockInputStream subject = new DummyBlockInputStream(blockID, blockSize,
+        pipeline, null, false, null, refreshPipeline, chunks, null) {
+      @Override
+      protected ChunkInputStream createChunkInputStream(ChunkInfo chunkInfo) {
+        return stream;
+      }
+    };
+    subject.initialize();
+
+    // WHEN
+    byte[] b = new byte[len];
+    LambdaTestUtils.intercept(StorageContainerException.class,
+        () -> subject.read(b, 0, len));
+
+    // THEN
+    verify(refreshPipeline).apply(blockID);
+  }
+
+  @Test
+  public void testReadNotRetriedOnOtherException() throws Exception {
+    // GIVEN
+    BlockID blockID = new BlockID(new ContainerBlockID(1, 1));
+    Pipeline pipeline = MockPipeline.createSingleNodePipeline();
+
+    final int len = 200;
+    final ChunkInputStream stream = mock(ChunkInputStream.class);
+    when(stream.read(any(), anyInt(), anyInt()))
+        .thenThrow(new OzoneChecksumException("checksum missing"));
+    when(stream.getRemaining())
+        .thenReturn((long) len);
+
+    BlockInputStream subject = new DummyBlockInputStream(blockID, blockSize,
+        pipeline, null, false, null, refreshPipeline, chunks, null) {
+      @Override
+      protected ChunkInputStream createChunkInputStream(ChunkInfo chunkInfo) {
+        return stream;
+      }
+    };
+    subject.initialize();
+
+    // WHEN
+    byte[] b = new byte[len];
+    LambdaTestUtils.intercept(OzoneChecksumException.class,
+        () -> subject.read(b, 0, len));
+
+    // THEN
+    verify(refreshPipeline, never()).apply(blockID);
+  }
+
+  private Pipeline samePipelineWithNewId(Pipeline pipeline) {
+    List<DatanodeDetails> reverseOrder = new ArrayList<>(pipeline.getNodes());
+    Collections.reverse(reverseOrder);
+    return MockPipeline.createPipeline(reverseOrder);
+  }
+
+  @Test
+  public void testRefreshOnReadFailureAfterUnbuffer() throws Exception {
+    // GIVEN
+    BlockID blockID = new BlockID(new ContainerBlockID(1, 1));
+    Pipeline pipeline = MockPipeline.createSingleNodePipeline();
+    Pipeline newPipeline = MockPipeline.createSingleNodePipeline();
+    XceiverClientFactory clientFactory = mock(XceiverClientFactory.class);
+    XceiverClientSpi client = mock(XceiverClientSpi.class);
+    when(clientFactory.acquireClientForReadData(pipeline))
+        .thenReturn(client);
+
+    final int len = 200;
+    final ChunkInputStream stream = mock(ChunkInputStream.class);
+    when(stream.read(any(), anyInt(), anyInt()))
+        .thenThrow(new StorageContainerException("test", CONTAINER_NOT_FOUND))
+        .thenReturn(len);
+    when(stream.getRemaining())
+        .thenReturn((long) len);
+
+    when(refreshPipeline.apply(blockID))
+        .thenReturn(newPipeline);
+
+    BlockInputStream subject = new BlockInputStream(blockID, blockSize,
+        pipeline, null, false, clientFactory, refreshPipeline) {
+      @Override
+      protected List<ChunkInfo> getChunkInfos() throws IOException {
+        acquireClient();
+        return chunks;
+      }
+
+      @Override
+      protected ChunkInputStream createChunkInputStream(ChunkInfo chunkInfo) {
+        return stream;
+      }
+    };
+    subject.initialize();
+    subject.unbuffer();
+
+    // WHEN
+    byte[] b = new byte[len];
+    int bytesRead = subject.read(b, 0, len);
+
+    // THEN
+    Assert.assertEquals(len, bytesRead);
+    verify(refreshPipeline).apply(blockID);
+    verify(clientFactory).acquireClientForReadData(pipeline);
+    verify(clientFactory).releaseClient(client, false);
+  }
 }
diff --git a/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/TestChunkInputStream.java b/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/TestChunkInputStream.java
index eea8e1f..cb110b1 100644
--- a/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/TestChunkInputStream.java
+++ b/hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/TestChunkInputStream.java
@@ -20,16 +20,26 @@
 
 import java.io.EOFException;
 import java.util.Random;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumType;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import org.apache.hadoop.hdds.scm.XceiverClientFactory;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.pipeline.MockPipeline;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.ozone.common.Checksum;
 import org.apache.hadoop.test.GenericTestUtils;
 
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
 /**
  * Tests for {@link ChunkInputStream}'s functionality.
  */
@@ -59,8 +69,8 @@
             chunkData, 0, CHUNK_SIZE).getProtoBufMessage())
         .build();
 
-    chunkStream =
-        new DummyChunkInputStream(this, chunkInfo, null, null, true, chunkData);
+    chunkStream = new DummyChunkInputStream(chunkInfo, null, null, true,
+        chunkData, null);
   }
 
   static byte[] generateRandomData(int length) {
@@ -174,4 +184,50 @@
     chunkStream.read(b2, 0, 20);
     matchWithInputData(b2, 70, 20);
   }
-}
\ No newline at end of file
+
+  @Test
+  public void testUnbuffer() throws Exception {
+    byte[] b1 = new byte[20];
+    chunkStream.read(b1, 0, 20);
+    matchWithInputData(b1, 0, 20);
+
+    chunkStream.unbuffer();
+
+    Assert.assertFalse(chunkStream.buffersAllocated());
+
+    // Next read should start from the position of the last read + 1 i.e. 20
+    byte[] b2 = new byte[20];
+    chunkStream.read(b2, 0, 20);
+    matchWithInputData(b2, 20, 20);
+  }
+
+  @Test
+  public void connectsToNewPipeline() throws Exception {
+    // GIVEN
+    Pipeline pipeline = MockPipeline.createSingleNodePipeline();
+    Pipeline newPipeline = MockPipeline.createSingleNodePipeline();
+    XceiverClientFactory clientFactory = mock(XceiverClientFactory.class);
+    XceiverClientSpi client = mock(XceiverClientSpi.class);
+    when(clientFactory.acquireClientForReadData(pipeline))
+        .thenReturn(client);
+
+    AtomicReference<Pipeline> pipelineRef = new AtomicReference<>(pipeline);
+
+    ChunkInputStream subject = new ChunkInputStream(chunkInfo, null,
+        clientFactory, pipelineRef::get, false, null) {
+      @Override
+      protected ByteString readChunk(ChunkInfo readChunkInfo) {
+        return ByteString.copyFrom(chunkData);
+      }
+    };
+
+    // WHEN
+    subject.unbuffer();
+    pipelineRef.set(newPipeline);
+    int b = subject.read();
+
+    // THEN
+    Assert.assertNotEquals(-1, b);
+    verify(clientFactory).acquireClientForReadData(newPipeline);
+  }
+}
diff --git a/hadoop-hdds/client/src/test/resources/log4j.properties b/hadoop-hdds/client/src/test/resources/log4j.properties
new file mode 100644
index 0000000..bb5cbe5
--- /dev/null
+++ b/hadoop-hdds/client/src/test/resources/log4j.properties
@@ -0,0 +1,23 @@
+#
+#   Licensed to the Apache Software Foundation (ASF) under one or more
+#   contributor license agreements.  See the NOTICE file distributed with
+#   this work for additional information regarding copyright ownership.
+#   The ASF licenses this file to You under the Apache License, Version 2.0
+#   (the "License"); you may not use this file except in compliance with
+#   the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#   Unless required by applicable law or agreed to in writing, software
+#   distributed under the License is distributed on an "AS IS" BASIS,
+#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#   See the License for the specific language governing permissions and
+#   limitations under the License.
+#
+# log4j configuration used during build and unit tests
+
+log4j.rootLogger=INFO,stdout
+log4j.threshold=ALL
+log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java
index b778e03..56a87e8 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java
@@ -43,13 +43,25 @@
   public static final String OZONE_QUOTA_TB = "TB";
 
   /** Quota Units.*/
-  public enum Units {UNDEFINED, BYTES, KB, MB, GB, TB}
+  public enum Units {BYTES, KB, MB, GB, TB}
 
   // Quota to decide how many buckets can be created.
-  private long quotaInCounts;
+  private long quotaInNamespace;
   // Quota to decide how many storage space will be used in bytes.
   private long quotaInBytes;
   private RawQuotaInBytes rawQuotaInBytes;
+  // Data class of Quota.
+  private static QuotaList quotaList;
+
+  /** Populates the QuotaList with units ordered from largest to smallest. */
+  static {
+    quotaList = new QuotaList();
+    quotaList.addQuotaList(OZONE_QUOTA_TB, Units.TB, TB);
+    quotaList.addQuotaList(OZONE_QUOTA_GB, Units.GB, GB);
+    quotaList.addQuotaList(OZONE_QUOTA_MB, Units.MB, MB);
+    quotaList.addQuotaList(OZONE_QUOTA_KB, Units.KB, KB);
+    quotaList.addQuotaList(OZONE_QUOTA_BYTES, Units.BYTES, 1L);
+  }
 
   /**
    * Used to convert user input values into bytes such as: 1MB-> 1048576.
@@ -72,24 +84,17 @@
     }
 
     /**
-     * Returns size in Bytes or -1 if there is no Quota.
+     * Returns size in Bytes, or a negative number if there is no Quota.
      */
     public long sizeInBytes() {
-      switch (this.unit) {
-      case BYTES:
-        return this.getSize();
-      case KB:
-        return this.getSize() * KB;
-      case MB:
-        return this.getSize() * MB;
-      case GB:
-        return this.getSize() * GB;
-      case TB:
-        return this.getSize() * TB;
-      case UNDEFINED:
-      default:
-        return -1;
+      long sQuota = -1L;
+      for (Units quota : quotaList.getUnitQuotaArray()) {
+        if (quota == this.unit) {
+          sQuota = quotaList.getQuotaSize(quota);
+          break;
+        }
       }
+      return this.getSize() * sQuota;
     }
 
     @Override
@@ -120,11 +125,11 @@
   /**
    * Constructor for Ozone Quota.
    *
-   * @param quotaInCounts Volume quota in counts
+   * @param quotaInNamespace Volume quota in counts
    * @param rawQuotaInBytes RawQuotaInBytes value
    */
-  private OzoneQuota(long quotaInCounts, RawQuotaInBytes rawQuotaInBytes) {
-    this.quotaInCounts = quotaInCounts;
+  private OzoneQuota(long quotaInNamespace, RawQuotaInBytes rawQuotaInBytes) {
+    this.quotaInNamespace = quotaInNamespace;
     this.rawQuotaInBytes = rawQuotaInBytes;
     this.quotaInBytes = rawQuotaInBytes.sizeInBytes();
   }
@@ -144,12 +149,12 @@
    * Quota Object.
    *
    * @param quotaInBytes Volume quota in bytes
-   * @param quotaInCounts Volume quota in counts
+   * @param quotaInNamespace Volume quota in counts
    *
    * @return OzoneQuota object
    */
   public static OzoneQuota parseQuota(String quotaInBytes,
-      long quotaInCounts) {
+      long quotaInNamespace) {
 
     if (Strings.isNullOrEmpty(quotaInBytes)) {
       throw new IllegalArgumentException(
@@ -164,46 +169,22 @@
     long quotaMultiplyExact = 0;
 
     try {
-      if (uppercase.endsWith(OZONE_QUOTA_KB)) {
-        size = uppercase
-            .substring(0, uppercase.length() - OZONE_QUOTA_KB.length());
-        currUnit = Units.KB;
-        quotaMultiplyExact = Math.multiplyExact(Long.parseLong(size), KB);
-      }
-
-      if (uppercase.endsWith(OZONE_QUOTA_MB)) {
-        size = uppercase
-            .substring(0, uppercase.length() - OZONE_QUOTA_MB.length());
-        currUnit = Units.MB;
-        quotaMultiplyExact = Math.multiplyExact(Long.parseLong(size), MB);
-      }
-
-      if (uppercase.endsWith(OZONE_QUOTA_GB)) {
-        size = uppercase
-            .substring(0, uppercase.length() - OZONE_QUOTA_GB.length());
-        currUnit = Units.GB;
-        quotaMultiplyExact = Math.multiplyExact(Long.parseLong(size), GB);
-      }
-
-      if (uppercase.endsWith(OZONE_QUOTA_TB)) {
-        size = uppercase
-            .substring(0, uppercase.length() - OZONE_QUOTA_TB.length());
-        currUnit = Units.TB;
-        quotaMultiplyExact = Math.multiplyExact(Long.parseLong(size), TB);
-      }
-
-      if (uppercase.endsWith(OZONE_QUOTA_BYTES)) {
-        size = uppercase
-            .substring(0, uppercase.length() - OZONE_QUOTA_BYTES.length());
-        currUnit = Units.BYTES;
-        quotaMultiplyExact = Math.multiplyExact(Long.parseLong(size), 1L);
+      for (String quota : quotaList.getOzoneQuotaArray()) {
+        if (uppercase.endsWith((quota))) {
+          size = uppercase
+              .substring(0, uppercase.length() - quota.length());
+          currUnit = quotaList.getUnits(quota);
+          quotaMultiplyExact = Math.multiplyExact(Long.parseLong(size),
+              quotaList.getQuotaSize(currUnit));
+          break;
+        }
       }
       nSize = Long.parseLong(size);
     } catch (NumberFormatException e) {
       throw new IllegalArgumentException("Invalid values for quota, to ensure" +
-          " that the Quota format is legal(supported values are BYTES, KB, " +
-          "MB, GB and TB).");
-    } catch  (ArithmeticException e) {
+          " that the Quota format is legal(supported values are BYTES, " +
+          " KB, MB, GB and TB).");
+    } catch (ArithmeticException e) {
       LOG.debug("long overflow:\n{}", quotaMultiplyExact);
       throw new IllegalArgumentException("Invalid values for quota, the quota" +
           " value cannot be greater than Long.MAX_VALUE BYTES");
@@ -213,7 +194,7 @@
       throw new IllegalArgumentException("Quota cannot be negative.");
     }
 
-    return new OzoneQuota(quotaInCounts,
+    return new OzoneQuota(quotaInNamespace,
         new RawQuotaInBytes(currUnit, nSize));
   }
 
@@ -222,35 +203,25 @@
    * Returns OzoneQuota corresponding to size in bytes.
    *
    * @param quotaInBytes in bytes to be converted
-   * @param quotaInCounts in counts to be converted
+   * @param quotaInNamespace in counts to be converted
    *
    * @return OzoneQuota object
    */
   public static OzoneQuota getOzoneQuota(long quotaInBytes,
-      long quotaInCounts) {
-    long size;
-    Units unit;
-    if (quotaInBytes % TB == 0) {
-      size = quotaInBytes / TB;
-      unit = Units.TB;
-    } else if (quotaInBytes % GB == 0) {
-      size = quotaInBytes / GB;
-      unit = Units.GB;
-    } else if (quotaInBytes % MB == 0) {
-      size = quotaInBytes / MB;
-      unit = Units.MB;
-    } else if (quotaInBytes % KB == 0) {
-      size = quotaInBytes / KB;
-      unit = Units.KB;
-    } else {
-      size = quotaInBytes;
-      unit = Units.BYTES;
+      long quotaInNamespace) {
+    long size = 1L;
+    Units unit = Units.BYTES;
+    for (Long quota : quotaList.getSizeQuotaArray()) {
+      if (quotaInBytes % quota == 0) {
+        size = quotaInBytes / quota;
+        unit = quotaList.getQuotaUnit(quota);
+      }
     }
-    return new OzoneQuota(quotaInCounts, new RawQuotaInBytes(unit, size));
+    return new OzoneQuota(quotaInNamespace, new RawQuotaInBytes(unit, size));
   }
 
-  public long getQuotaInCounts() {
-    return quotaInCounts;
+  public long getQuotaInNamespace() {
+    return quotaInNamespace;
   }
 
   public long getQuotaInBytes() {
@@ -260,6 +231,6 @@
   @Override
   public String toString() {
     return "Space Bytes Quota: " + rawQuotaInBytes.toString() + "\n" +
-        "Counts Quota: " + quotaInCounts;
+        "Counts Quota: " + quotaInNamespace;
   }
 }
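
A short usage sketch of the refactored methods above; from the caller's point of view the API is unchanged, the units are simply resolved through the shared QuotaList now.

    import org.apache.hadoop.hdds.client.OzoneQuota;

    public final class QuotaParseSketch {
      public static void main(String[] args) {
        // "10GB" of space quota and a namespace (bucket/key count) quota of 100.
        OzoneQuota quota = OzoneQuota.parseQuota("10GB", 100);
        System.out.println(quota.getQuotaInBytes());      // 10737418240
        System.out.println(quota.getQuotaInNamespace());  // 100
        // Converts a raw byte count back into an OzoneQuota.
        System.out.println(OzoneQuota.getOzoneQuota(10737418240L, 100));
      }
    }
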
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/QuotaList.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/QuotaList.java
new file mode 100644
index 0000000..205cca1
--- /dev/null
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/QuotaList.java
@@ -0,0 +1,67 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.client;
+
+import java.util.ArrayList;
+
+/**
+ * This class holds the ArrayLists of storage constants used in OzoneQuota.
+ */
+public class QuotaList {
+  private ArrayList<String> ozoneQuota;
+  private ArrayList<OzoneQuota.Units> unitQuota;
+  private ArrayList<Long> sizeQuota;
+
+  public QuotaList(){
+    ozoneQuota = new ArrayList<String>();
+    unitQuota = new ArrayList<OzoneQuota.Units>();
+    sizeQuota = new ArrayList<Long>();
+  }
+
+  public void addQuotaList(String oQuota, OzoneQuota.Units uQuota, Long sQuota){
+    ozoneQuota.add(oQuota);
+    unitQuota.add(uQuota);
+    sizeQuota.add(sQuota);
+  }
+
+  public ArrayList<String> getOzoneQuotaArray() {
+    return this.ozoneQuota;
+  }
+
+  public ArrayList<Long> getSizeQuotaArray() {
+    return this.sizeQuota;
+  }
+
+  public ArrayList<OzoneQuota.Units> getUnitQuotaArray() {
+    return this.unitQuota;
+  }
+
+  public OzoneQuota.Units getUnits(String oQuota){
+    return unitQuota.get(ozoneQuota.indexOf(oQuota));
+  }
+
+  public Long getQuotaSize(OzoneQuota.Units uQuota){
+    return sizeQuota.get(unitQuota.indexOf(uQuota));
+  }
+
+  public OzoneQuota.Units getQuotaUnit(Long sQuota){
+    return unitQuota.get(sizeQuota.indexOf(sQuota));
+  }
+
+}
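To make the lookup concrete, here is an illustrative sketch (not part of the patch) of how a QuotaList could be populated and then consumed by the reworked getOzoneQuota() selection loop. The 1024-based sizes, the unit name strings, and the ascending insertion order are assumptions for the example only.

// Hypothetical example; placed in the same package so OzoneQuota.Units is
// visible exactly as QuotaList itself uses it.
package org.apache.hadoop.hdds.client;

public final class QuotaListSketch {
  public static void main(String[] args) {
    QuotaList quotaList = new QuotaList();
    quotaList.addQuotaList("B", OzoneQuota.Units.BYTES, 1L);
    quotaList.addQuotaList("KB", OzoneQuota.Units.KB, 1024L);
    quotaList.addQuotaList("MB", OzoneQuota.Units.MB, 1024L * 1024);
    quotaList.addQuotaList("GB", OzoneQuota.Units.GB, 1024L * 1024 * 1024);
    quotaList.addQuotaList("TB", OzoneQuota.Units.TB, 1024L * 1024 * 1024 * 1024);

    long quotaInBytes = 5L * 1024 * 1024 * 1024;   // 5 GB
    long size = 1L;
    OzoneQuota.Units unit = OzoneQuota.Units.BYTES;
    // Same selection logic as the reworked getOzoneQuota(): the last (largest)
    // divisor that divides the value evenly wins.
    for (Long quota : quotaList.getSizeQuotaArray()) {
      if (quotaInBytes % quota == 0) {
        size = quotaInBytes / quota;
        unit = quotaList.getQuotaUnit(quota);
      }
    }
    System.out.println(size + " " + unit);         // expected: 5 GB
  }
}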
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
index 9cfe0f6..d73c605 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
@@ -38,9 +38,13 @@
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdds.annotation.InterfaceAudience;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.utils.LegacyHadoopConfigurationSource;
 
 import com.google.common.base.Preconditions;
+import org.apache.ratis.server.RaftServerConfigKeys;
+
+import static org.apache.hadoop.hdds.ratis.RatisHelper.HDDS_DATANODE_RATIS_PREFIX_KEY;
 
 /**
  * Configuration for ozone.
@@ -49,6 +53,8 @@
 public class OzoneConfiguration extends Configuration
     implements MutableConfigurationSource {
   static {
+    addDeprecatedKeys();
+
     activate();
   }
 
@@ -287,4 +293,15 @@
     }
     return configMap;
   }
+
+  private static void addDeprecatedKeys(){
+    Configuration.addDeprecations(new DeprecationDelta[]{
+        new DeprecationDelta("ozone.datanode.pipeline.limit",
+            ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT),
+        new DeprecationDelta(HDDS_DATANODE_RATIS_PREFIX_KEY + "."
+           + RaftServerConfigKeys.PREFIX + "." + "rpcslowness.timeout",
+           HDDS_DATANODE_RATIS_PREFIX_KEY + "."
+           + RaftServerConfigKeys.PREFIX + "." + "rpc.slowness.timeout")
+    });
+  }
 }
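A minimal sketch (not taken from the patch) of what the registered DeprecationDelta buys callers: a value set under the retired property name resolves through the new ScmConfigKeys name. The value "4" is arbitrary.

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;

public final class DeprecatedKeySketch {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    conf.set("ozone.datanode.pipeline.limit", "4");   // old, now-deprecated name
    // The static initializer above registers the DeprecationDelta before
    // activate() runs, so the old name redirects to the new one.
    System.out.println(conf.get(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT));
  }
}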
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
index 1a42f3a..60fedd2 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/DatanodeDetails.java
@@ -18,17 +18,19 @@
 
 package org.apache.hadoop.hdds.protocol;
 
-import com.google.common.base.Preconditions;
-import com.google.common.base.Strings;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
 import org.apache.hadoop.hdds.annotation.InterfaceAudience;
 import org.apache.hadoop.hdds.annotation.InterfaceStability;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails.Port.Name;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.scm.net.NetConstants;
 import org.apache.hadoop.hdds.scm.net.NodeImpl;
 
-import java.util.ArrayList;
-import java.util.List;
-import java.util.UUID;
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
 
 /**
  * DatanodeDetails class contains details about DataNode like:
@@ -57,6 +59,8 @@
   private long setupTime;
   private String revision;
   private String buildDate;
+  private HddsProtos.NodeOperationalState persistedOpState;
+  private long persistedOpStateExpiryEpochSec = 0;
 
   /**
    * Constructs DatanodeDetails instance. DatanodeDetails.Builder is used
@@ -71,11 +75,16 @@
    * @param setupTime the setup time of DataNode
    * @param revision DataNodes's revision
    * @param buildDate DataNodes's build timestamp
+   * @param persistedOpState Operational State stored on DN.
+   * @param persistedOpStateExpiryEpochSec Seconds since the epoch at which
+   *                                       the stored state should expire.
    */
   @SuppressWarnings("parameternumber")
   private DatanodeDetails(UUID uuid, String ipAddress, String hostName,
       String networkLocation, List<Port> ports, String certSerialId,
-      String version, long setupTime, String revision, String buildDate) {
+      String version, long setupTime, String revision, String buildDate,
+      HddsProtos.NodeOperationalState persistedOpState,
+      long persistedOpStateExpiryEpochSec) {
     super(hostName, networkLocation, NetConstants.NODE_COST_DEFAULT);
     this.uuid = uuid;
     this.uuidString = uuid.toString();
@@ -87,6 +96,8 @@
     this.setupTime = setupTime;
     this.revision = revision;
     this.buildDate = buildDate;
+    this.persistedOpState = persistedOpState;
+    this.persistedOpStateExpiryEpochSec = persistedOpStateExpiryEpochSec;
   }
 
   public DatanodeDetails(DatanodeDetails datanodeDetails) {
@@ -103,6 +114,9 @@
     this.setupTime = datanodeDetails.setupTime;
     this.revision = datanodeDetails.revision;
     this.buildDate = datanodeDetails.buildDate;
+    this.persistedOpState = datanodeDetails.getPersistedOpState();
+    this.persistedOpStateExpiryEpochSec =
+        datanodeDetails.getPersistedOpStateExpiryEpochSec();
   }
 
   /**
@@ -171,6 +185,10 @@
     ports.add(port);
   }
 
+  public void setPort(Name name, int port) {
+    setPort(new Port(name, port));
+  }
+
   /**
    * Returns all the Ports used by DataNode.
    *
@@ -181,6 +199,46 @@
   }
 
   /**
+   * Return the persistedOpState. If the stored value is null, return the
+   * default value of IN_SERVICE.
+   *
+   * @return The OperationalState persisted on the datanode.
+   */
+  public HddsProtos.NodeOperationalState getPersistedOpState() {
+    if (persistedOpState == null) {
+      return HddsProtos.NodeOperationalState.IN_SERVICE;
+    } else {
+      return persistedOpState;
+    }
+  }
+
+  /**
+   * Set the persistedOpState for this instance.
+   *
+   * @param state The new operational state.
+   */
+  public void setPersistedOpState(HddsProtos.NodeOperationalState state) {
+    this.persistedOpState = state;
+  }
+
+  /**
+   * Get the persistedOpStateExpiryEpochSec for the instance.
+   * @return Seconds from the epoch when the operational state should expire.
+   */
+  public long getPersistedOpStateExpiryEpochSec() {
+    return persistedOpStateExpiryEpochSec;
+  }
+
+  /**
+   * Set persistedOpStateExpiryEpochSec.
+   * @param expiry The number of seconds since the epoch at which the
+   *               operational state should expire.
+   */
+  public void setPersistedOpStateExpiryEpochSec(long expiry) {
+    this.persistedOpStateExpiryEpochSec = expiry;
+  }
+
+  /**
    * Given the name returns port number, null if the asked port is not found.
    *
    * @param name Name of the port
@@ -231,6 +289,13 @@
     if (datanodeDetailsProto.hasNetworkLocation()) {
       builder.setNetworkLocation(datanodeDetailsProto.getNetworkLocation());
     }
+    if (datanodeDetailsProto.hasPersistedOpState()) {
+      builder.setPersistedOpState(datanodeDetailsProto.getPersistedOpState());
+    }
+    if (datanodeDetailsProto.hasPersistedOpStateExpiry()) {
+      builder.setPersistedOpStateExpiry(
+          datanodeDetailsProto.getPersistedOpStateExpiry());
+    }
     return builder.build();
   }
 
@@ -294,6 +359,10 @@
     if (!Strings.isNullOrEmpty(getNetworkLocation())) {
       builder.setNetworkLocation(getNetworkLocation());
     }
+    if (persistedOpState != null) {
+      builder.setPersistedOpState(persistedOpState);
+    }
+    builder.setPersistedOpStateExpiry(persistedOpStateExpiryEpochSec);
 
     for (Port port : ports) {
       builder.addPorts(HddsProtos.Port.newBuilder()
@@ -342,6 +411,8 @@
         ", networkLocation: " +
         getNetworkLocation() +
         ", certSerialId: " + certSerialId +
+        ", persistedOpState: " + persistedOpState +
+        ", persistedOpStateExpiryEpochSec: " + persistedOpStateExpiryEpochSec +
         "}";
   }
 
@@ -385,6 +456,8 @@
     private long setupTime;
     private String revision;
     private String buildDate;
+    private HddsProtos.NodeOperationalState persistedOpState;
+    private long persistedOpStateExpiryEpochSec = 0;
 
     /**
      * Default private constructor. To create Builder instance use
@@ -412,6 +485,9 @@
       this.setupTime = details.getSetupTime();
       this.revision = details.getRevision();
       this.buildDate = details.getBuildDate();
+      this.persistedOpState = details.getPersistedOpState();
+      this.persistedOpStateExpiryEpochSec =
+          details.getPersistedOpStateExpiryEpochSec();
       return this;
     }
 
@@ -542,6 +618,31 @@
       return this;
     }
 
+    /**
+     * Sets persistedOpState.
+     *
+     * @param state The operational state persisted on the datanode
+     *
+     * @return DatanodeDetails.Builder
+     */
+    public Builder setPersistedOpState(HddsProtos.NodeOperationalState state){
+      this.persistedOpState = state;
+      return this;
+    }
+
+    /**
+     * Sets persistedOpStateExpiryEpochSec.
+     *
+     * @param expiry The number of seconds since the epoch at which the
+     *               operational state should expire.
+     *
+     * @return DatanodeDetails.Builder
+     */
+    public Builder setPersistedOpStateExpiry(long expiry){
+      this.persistedOpStateExpiryEpochSec = expiry;
+      return this;
+    }
+
     /**
      * Builds and returns DatanodeDetails instance.
      *
@@ -553,8 +654,8 @@
         networkLocation = NetConstants.DEFAULT_RACK;
       }
       DatanodeDetails dn = new DatanodeDetails(id, ipAddress, hostName,
-          networkLocation, ports, certSerialId,
-          version, setupTime, revision, buildDate);
+          networkLocation, ports, certSerialId, version, setupTime, revision,
+          buildDate, persistedOpState, persistedOpStateExpiryEpochSec);
       if (networkName != null) {
         dn.setNetworkName(networkName);
       }
@@ -583,7 +684,7 @@
      * Ports that are supported in DataNode.
      */
     public enum Name {
-      STANDALONE, RATIS, REST
+      STANDALONE, RATIS, REST, REPLICATION
     }
 
     private Name name;
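A hedged sketch of the new persisted-state accessors in use. MockDatanodeDetails.randomDatanodeDetails() is assumed to be available from the test sources touched later in this patch, and DECOMMISSIONED is taken from the javadoc wording rather than verified against the proto definition.

import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;

public final class PersistedOpStateSketch {
  public static void main(String[] args) {
    DatanodeDetails dn = MockDatanodeDetails.randomDatanodeDetails();
    // Before anything is persisted, the getter falls back to IN_SERVICE.
    System.out.println(dn.getPersistedOpState());

    dn.setPersistedOpState(HddsProtos.NodeOperationalState.DECOMMISSIONED);
    dn.setPersistedOpStateExpiryEpochSec(0);   // 0 = no expiry
    // toString() now includes both persisted fields.
    System.out.println(dn);
  }
}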
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfig.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfig.java
index e9c283d..7a144d8 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfig.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfig.java
@@ -79,7 +79,7 @@
 
   @Config(key = "block.deletion.per-interval.max",
       type = ConfigType.INT,
-      defaultValue = "10000",
+      defaultValue = "20000",
       tags = { ConfigTag.SCM, ConfigTag.DELETION},
       description =
           "Maximum number of blocks which SCM processes during an interval. "
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
index 7b01e07..e5958b7 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@@ -239,7 +239,7 @@
   public static final String OZONE_SCM_HEARTBEAT_RPC_TIMEOUT =
       "ozone.scm.heartbeat.rpc-timeout";
   public static final String OZONE_SCM_HEARTBEAT_RPC_TIMEOUT_DEFAULT =
-      "1s";
+      "5s";
 
   public static final String OZONE_SCM_HEARTBEAT_RPC_RETRY_COUNT =
       "ozone.scm.heartbeat.rpc-retry-count";
@@ -298,7 +298,7 @@
   // Pipeline placement policy:
   // Upper limit for how many pipelines a datanode can engage in.
   public static final String OZONE_DATANODE_PIPELINE_LIMIT =
-          "ozone.datanode.pipeline.limit";
+          "ozone.scm.datanode.pipeline.limit";
   public static final int OZONE_DATANODE_PIPELINE_LIMIT_DEFAULT = 2;
 
   // Upper limit for how many pipelines can be created
@@ -364,6 +364,11 @@
   public static final String HDDS_TRACING_ENABLED = "hdds.tracing.enabled";
   public static final boolean HDDS_TRACING_ENABLED_DEFAULT = false;
 
+  public static final String OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL =
+      "ozone.scm.datanode.admin.monitor.interval";
+  public static final String OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL_DEFAULT =
+      "30s";
+
   /**
    * Never constructed.
    */
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
index 84831c1..bab99b4 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
@@ -145,15 +145,52 @@
       String owner) throws IOException;
 
   /**
-   * Returns a set of Nodes that meet a query criteria.
-   * @param nodeStatuses - Criteria that we want the node to have.
+   * Returns a set of Nodes that meet the query criteria. Passing null for
+   * opState or nodeState acts as a wildcard for that field, matching nodes
+   * in any state.
+   * @param opState - Operational state of the node, eg IN_SERVICE,
+   *                DECOMMISSIONED, etc
+   * @param nodeState - Health of the node, eg HEALTHY, STALE, etc
    * @param queryScope - Query scope - Cluster or pool.
    * @param poolName - if it is pool, a pool name is required.
    * @return A set of nodes that meet the requested criteria.
    * @throws IOException
    */
-  List<HddsProtos.Node> queryNode(HddsProtos.NodeState nodeStatuses,
-      HddsProtos.QueryScope queryScope, String poolName) throws IOException;
+  List<HddsProtos.Node> queryNode(HddsProtos.NodeOperationalState opState,
+      HddsProtos.NodeState nodeState, HddsProtos.QueryScope queryScope,
+      String poolName) throws IOException;
+
+  /**
+   * Allows a list of hosts to be decommissioned. The hosts are identified
+   * by their hostname and optionally port in the format foo.com:port.
+   * @param hosts A list of hostnames, optionally with port
+   * @throws IOException
+   */
+  void decommissionNodes(List<String> hosts) throws IOException;
+
+  /**
+   * Allows a list of hosts in maintenance or decommission states to be placed
+   * back in service. The hosts are identified by their hostname and optionally
+   * port in the format foo.com:port.
+   * @param hosts A list of hostnames, optionally with port
+   * @throws IOException
+   */
+  void recommissionNodes(List<String> hosts) throws IOException;
+
+  /**
+   * Place the list of datanodes into maintenance mode. If a non-zero endHours
+   * is passed, the hosts will automatically exit maintenance mode after the
+   * given time has passed. Passing an end time of zero means the hosts will
+   * remain in maintenance indefinitely.
+   * The hosts are identified by their hostname and optionally port in the
+   * format foo.com:port.
+   * @param hosts A list of hostnames, optionally with port
+   * @param endHours The number of hours from now after which maintenance will
+   *                 end, or zero if maintenance must be ended manually.
+   * @throws IOException
+   */
+  void startMaintenanceNodes(List<String> hosts, int endHours)
+      throws IOException;
 
   /**
    * Creates a specified replication pipeline.
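A hedged usage sketch for the new admin operations on ScmClient. The client instance and host name are placeholders, and QueryScope.CLUSTER with an empty pool name is assumed to be acceptable for cluster-wide queries, as the existing queryNode contract suggests.

import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.scm.client.ScmClient;

public final class DatanodeAdminSketch {
  static void drain(ScmClient scm, String host) throws IOException {
    // Null opState is a wildcard: only the health filter applies here.
    List<HddsProtos.Node> healthy = scm.queryNode(null,
        HddsProtos.NodeState.HEALTHY, HddsProtos.QueryScope.CLUSTER, "");
    System.out.println("healthy nodes: " + healthy.size());

    scm.decommissionNodes(Arrays.asList(host));          // take out of service
    scm.recommissionNodes(Arrays.asList(host));          // return to service
    scm.startMaintenanceNodes(Arrays.asList(host), 24);  // auto-ends after 24h
  }
}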
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
index 94ef442..3739ed3 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
@@ -120,12 +120,22 @@
   void deleteContainer(long containerID) throws IOException;
 
   /**
-   *  Queries a list of Node Statuses.
-   * @param state
+   *  Queries a list of Node Statuses. Passing null for either opState or
+   *  state acts as a wildcard for that field, matching nodes in any state.
+   * @param opState The node operational state
+   * @param state The node health
    * @return List of Datanodes.
    */
-  List<HddsProtos.Node> queryNode(HddsProtos.NodeState state,
-      HddsProtos.QueryScope queryScope, String poolName) throws IOException;
+  List<HddsProtos.Node> queryNode(HddsProtos.NodeOperationalState opState,
+      HddsProtos.NodeState state, HddsProtos.QueryScope queryScope,
+      String poolName) throws IOException;
+
+  void decommissionNodes(List<String> nodes) throws IOException;
+
+  void recommissionNodes(List<String> nodes) throws IOException;
+
+  void startMaintenanceNodes(List<String> nodes, int endInHours)
+      throws IOException;
 
   /**
    * Close a container.
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
index 81470b2..591bf3a 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
@@ -23,6 +23,8 @@
 import org.apache.ratis.thirdparty.io.grpc.Context;
 import org.apache.ratis.thirdparty.io.grpc.Metadata;
 
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 import java.util.regex.Pattern;
 
 import static org.apache.ratis.thirdparty.io.grpc.Metadata.ASCII_STRING_MARSHALLER;
@@ -253,11 +255,15 @@
   // versions, requiring this property to be tracked on a per container basis.
   // V1: All data in default column family.
   public static final String SCHEMA_V1 = "1";
-  // V2: Metadata, block data, and deleted blocks in their own column families.
+  // V2: Metadata, block data, and delete transactions in their own
+  // column families.
   public static final String SCHEMA_V2 = "2";
   // Most recent schema version that all new containers should be created with.
   public static final String SCHEMA_LATEST = SCHEMA_V2;
 
+  public static final String[] SCHEMA_VERSIONS =
+      new String[] {SCHEMA_V1, SCHEMA_V2};
+
   // Supported store types.
   public static final String OZONE = "ozone";
   public static final String S3 = "s3";
@@ -269,8 +275,9 @@
   public static final String SRC_KEY = "srcKey";
   public static final String DST_KEY = "dstKey";
   public static final String USED_BYTES = "usedBytes";
+  public static final String USED_NAMESPACE = "usedNamespace";
   public static final String QUOTA_IN_BYTES = "quotaInBytes";
-  public static final String QUOTA_IN_COUNTS = "quotaInCounts";
+  public static final String QUOTA_IN_NAMESPACE = "quotaInNamespace";
   public static final String OBJECT_ID = "objectID";
   public static final String UPDATE_ID = "updateID";
   public static final String CLIENT_ID = "clientID";
@@ -352,7 +359,7 @@
   public static final String GDPR_FLAG = "gdprEnabled";
   public static final String GDPR_ALGORITHM_NAME = "AES";
   public static final int GDPR_DEFAULT_RANDOM_SECRET_LENGTH = 16;
-  public static final String GDPR_CHARSET = "UTF-8";
+  public static final Charset GDPR_CHARSET = StandardCharsets.UTF_8;
   public static final String GDPR_LENGTH = "length";
   public static final String GDPR_SECRET = "secret";
   public static final String GDPR_ALGORITHM = "algorithm";
@@ -383,6 +390,10 @@
   // An on-disk transient marker file used when replacing DB with checkpoint
   public static final String DB_TRANSIENT_MARKER = "dbInconsistentMarker";
 
+  public static final String OM_RATIS_SNAPSHOT_DIR = "snapshot";
+
+  public static final long DEFAULT_OM_UPDATE_ID = -1L;  
+
   // An on-disk marker file used to indicate that the OM is in prepare and
   // should remain prepared even after a restart.
   public static final String PREPARE_MARKER = "prepareMarker";
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lease/LeaseManager.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lease/LeaseManager.java
index 5e52b40..68ae49b 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lease/LeaseManager.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/lease/LeaseManager.java
@@ -17,9 +17,6 @@
 
 package org.apache.hadoop.ozone.lease;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.Callable;
@@ -28,6 +25,8 @@
 import java.util.concurrent.Executors;
 
 import static org.apache.hadoop.ozone.lease.Lease.messageForResource;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * LeaseManager is someone who can provide you leases based on your
@@ -46,6 +45,7 @@
 
   private final String name;
   private final long defaultTimeout;
+  private final Object monitor = new Object();
   private Map<T, Lease<T>> activeLeases;
   private LeaseMonitor leaseMonitor;
   private Thread leaseMonitorThread;
@@ -115,12 +115,14 @@
     if (LOG.isDebugEnabled()) {
       LOG.debug("Acquiring lease on {} for {} milliseconds", resource, timeout);
     }
-    if(activeLeases.containsKey(resource)) {
+    if (activeLeases.containsKey(resource)) {
       throw new LeaseAlreadyExistException(messageForResource(resource));
     }
     Lease<T> lease = new Lease<>(resource, timeout);
     activeLeases.put(resource, lease);
-    leaseMonitorThread.interrupt();
+    synchronized (monitor) {
+      monitor.notifyAll();
+    }
     return lease;
   }
 
@@ -135,7 +137,7 @@
   public Lease<T> get(T resource) throws LeaseNotFoundException {
     checkStatus();
     Lease<T> lease = activeLeases.get(resource);
-    if(lease != null) {
+    if (lease != null) {
       return lease;
     }
     throw new LeaseNotFoundException(messageForResource(resource));
@@ -156,7 +158,7 @@
       LOG.debug("Releasing lease on {}", resource);
     }
     Lease<T> lease = activeLeases.remove(resource);
-    if(lease == null) {
+    if (lease == null) {
       throw new LeaseNotFoundException(messageForResource(resource));
     }
     lease.invalidate();
@@ -171,11 +173,13 @@
     checkStatus();
     LOG.debug("Shutting down LeaseManager service");
     leaseMonitor.disable();
-    leaseMonitorThread.interrupt();
-    for(T resource : activeLeases.keySet()) {
+    synchronized (monitor) {
+      monitor.notifyAll();
+    }
+    for (T resource : activeLeases.keySet()) {
       try {
         release(resource);
-      }  catch(LeaseNotFoundException ex) {
+      } catch (LeaseNotFoundException ex) {
         //Ignore the exception, someone might have released the lease
       }
     }
@@ -187,7 +191,7 @@
    * running.
    */
   private void checkStatus() {
-    if(!isRunning) {
+    if (!isRunning) {
       throw new LeaseManagerNotRunningException("LeaseManager not running.");
     }
   }
@@ -198,8 +202,8 @@
    */
   private final class LeaseMonitor implements Runnable {
 
-    private volatile boolean monitor = true;
     private final ExecutorService executorService;
+    private volatile boolean running = true;
 
     private LeaseMonitor() {
       this.executorService = Executors.newCachedThreadPool();
@@ -207,7 +211,7 @@
 
     @Override
     public void run() {
-      while (monitor) {
+      while (running) {
         LOG.debug("{}-LeaseMonitor: checking for lease expiry", name);
         long sleepTime = Long.MAX_VALUE;
 
@@ -230,12 +234,12 @@
         }
 
         try {
-          if(!Thread.interrupted()) {
-            Thread.sleep(sleepTime);
+          synchronized (monitor) {
+            monitor.wait(sleepTime);
           }
         } catch (InterruptedException e) {
           // This means a new lease is added to activeLeases.
-          LOG.error("Execution was interrupted ", e);
+          LOG.warn("Lease manager is interrupted. Shutting down...", e);
           Thread.currentThread().interrupt();
         }
       }
@@ -246,7 +250,7 @@
      * will stop lease monitor.
      */
     public void disable() {
-      monitor = false;
+      running = false;
     }
   }
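The LeaseManager change above swaps interrupt-driven wake-ups for a dedicated monitor object, reserving interrupts for real shutdown. A standalone sketch of that pattern, with computeSleepMillis() as a stand-in for the real time-to-next-expiry calculation:

public final class MonitorWakeupSketch {
  private final Object monitor = new Object();
  private volatile boolean running = true;

  void loop() throws InterruptedException {
    while (running) {
      long sleepTime = computeSleepMillis();
      synchronized (monitor) {
        monitor.wait(sleepTime);   // returns early when poke() is called
      }
    }
  }

  void poke() {                    // e.g. a new lease was added
    synchronized (monitor) {
      monitor.notifyAll();
    }
  }

  void shutdown() {
    running = false;
    poke();
  }

  private long computeSleepMillis() {
    return 1000L;                  // placeholder for the real expiry math
  }
}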
 
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index d8402f7..e20d416 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -394,7 +394,7 @@
   </property>
   <property>
     <name>ozone.key.deleting.limit.per.task</name>
-    <value>1000</value>
+    <value>20000</value>
     <tag>OM, PERFORMANCE</tag>
     <description>
       A maximum number of keys to be scanned by key deleting service
@@ -776,10 +776,12 @@
     </description>
   </property>
   <property>
-  <name>ozone.datanode.pipeline.limit</name>
+  <name>ozone.scm.datanode.pipeline.limit</name>
   <value>2</value>
   <tag>OZONE, SCM, PIPELINE</tag>
   <description>Max number of pipelines per datanode can be engaged in.
+    Setting the value to 0 means the pipeline limit per datanode will be
+    determined by the number of metadata volumes reported per datanode.
   </description>
   </property>
   <property>
@@ -807,7 +809,7 @@
   <property>
     <name>ozone.scm.pipeline.leader-choose.policy</name>
     <value>
-      org.apache.hadoop.hdds.scm.pipeline.leader.choose.algorithms.DefaultLeaderChoosePolicy
+      org.apache.hadoop.hdds.scm.pipeline.leader.choose.algorithms.MinLeaderCountChoosePolicy
     </value>
     <tag>OZONE, SCM, PIPELINE</tag>
     <description>
@@ -943,7 +945,7 @@
   </property>
   <property>
     <name>ozone.scm.heartbeat.rpc-timeout</name>
-    <value>1s</value>
+    <value>5s</value>
     <tag>OZONE, MANAGEMENT</tag>
     <description>
       Timeout value for the RPC from Datanode to SCM.
@@ -1523,7 +1525,7 @@
 
   <property>
     <name>ozone.om.ratis.enable</name>
-    <value>false</value>
+    <value>true</value>
     <tag>OZONE, OM, RATIS, MANAGEMENT</tag>
     <description>Property to enable or disable Ratis server on OM.
     Please note - this is a temporary property to disable OM Ratis server.
@@ -1644,24 +1646,13 @@
   </property>
 
   <property>
-    <name>ozone.om.ratis.server.role.check.interval</name>
-    <value>15s</value>
-    <tag>OZONE, OM, RATIS, MANAGEMENT</tag>
-    <description>The interval between OM leader performing a role
-      check on its ratis server. Ratis server informs OM if it
-      loses the leader role. The scheduled check is an secondary
-      check to ensure that the leader role is updated periodically
-      .</description>
-  </property>
-
-  <property>
     <name>ozone.om.ratis.snapshot.dir</name>
     <value/>
     <tag>OZONE, OM, STORAGE, MANAGEMENT, RATIS</tag>
     <description>This directory is used for storing OM's snapshot
       related files like the ratisSnapshotIndex and DB checkpoint from leader
       OM.
-      If undefined, OM snapshot dir will fallback to ozone.om.ratis.storage.dir.
+      If undefined, the OM snapshot dir will fall back to ozone.metadata.dirs.
       This fallback approach is not recommended for production environments.
     </description>
   </property>
@@ -2336,6 +2327,17 @@
     </description>
   </property>
   <property>
+    <name>ozone.scm.datanode.admin.monitor.interval</name>
+    <value>30s</value>
+    <tag>SCM</tag>
+    <description>
+      This sets how frequently the datanode admin monitor runs to check for
+      nodes added to or removed from the admin workflow. It also checks the
+      progress of nodes that are decommissioning or entering maintenance to
+      see whether they have completed.
+    </description>
+  </property>
+  <property>
     <name>ozone.client.list.trash.keys.max</name>
     <value>1000</value>
     <tag>OZONE, CLIENT</tag>
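A small sketch, not from the patch, of how SCM-side code might read the new admin monitor interval; the 30-second fallback mirrors the default above, and getTimeDuration() is the standard Hadoop Configuration accessor for time-valued settings.

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;

public final class AdminMonitorIntervalSketch {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    long intervalSec = conf.getTimeDuration(
        ScmConfigKeys.OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL,
        30, TimeUnit.SECONDS);
    System.out.println("admin monitor interval: " + intervalSec + "s");
  }
}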
diff --git a/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/protocol/MockDatanodeDetails.java b/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/protocol/MockDatanodeDetails.java
index 06a1bf0..41ae6ec 100644
--- a/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/protocol/MockDatanodeDetails.java
+++ b/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/protocol/MockDatanodeDetails.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hdds.protocol;
 
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+
 import java.io.IOException;
 import java.net.ServerSocket;
 import java.util.Random;
@@ -101,6 +103,8 @@
         .addPort(ratisPort)
         .addPort(restPort)
         .setNetworkLocation(networkLocation)
+        .setPersistedOpState(HddsProtos.NodeOperationalState.IN_SERVICE)
+        .setPersistedOpStateExpiry(0)
         .build();
   }
 
diff --git a/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileAppender.java b/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileAppender.java
index 9f1c087..4256ac8 100644
--- a/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileAppender.java
+++ b/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileAppender.java
@@ -28,6 +28,7 @@
 import javax.xml.transform.stream.StreamResult;
 import java.io.InputStream;
 import java.io.Writer;
+import java.nio.charset.StandardCharsets;
 import java.util.Arrays;
 import java.util.stream.Collectors;
 
@@ -117,7 +118,8 @@
       factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
       Transformer transformer = factory.newTransformer();
 
-      transformer.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
+      transformer.setOutputProperty(OutputKeys.ENCODING,
+              StandardCharsets.UTF_8.name());
       transformer.setOutputProperty(OutputKeys.INDENT, "yes");
       transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount",
           "2");
diff --git a/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileGenerator.java b/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileGenerator.java
index f3d71be..a4d7dc8 100644
--- a/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileGenerator.java
+++ b/hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileGenerator.java
@@ -67,6 +67,7 @@
     try {
 
       //load existing generated config (if exists)
+      boolean resourceExists = true;
       ConfigFileAppender appender = new ConfigFileAppender();
       try (InputStream input = filer
           .getResource(StandardLocation.CLASS_OUTPUT, "",
@@ -74,6 +75,7 @@
         appender.load(input);
       } catch (FileNotFoundException | NoSuchFileException ex) {
         appender.init();
+        resourceExists = false;
       }
 
       Set<? extends Element> annotatedElements =
@@ -100,15 +102,16 @@
         }
 
       }
-      FileObject resource = filer
-          .createResource(StandardLocation.CLASS_OUTPUT, "",
-              OUTPUT_FILE_NAME);
+      if (!resourceExists) {
+        FileObject resource = filer
+            .createResource(StandardLocation.CLASS_OUTPUT, "",
+                OUTPUT_FILE_NAME);
 
-      try (Writer writer = new OutputStreamWriter(
-          resource.openOutputStream(), StandardCharsets.UTF_8)) {
-        appender.write(writer);
+        try (Writer writer = new OutputStreamWriter(
+            resource.openOutputStream(), StandardCharsets.UTF_8)) {
+          appender.write(writer);
+        }
       }
-
     } catch (IOException e) {
       processingEnv.getMessager().printMessage(Kind.ERROR,
           "Can't generate the config file from annotation: " + e);
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
index cfb22e3..2d1d4e3 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.ozone;
 
+import javax.management.ObjectName;
 import java.io.File;
 import java.io.IOException;
 import java.net.InetAddress;
@@ -27,10 +28,9 @@
 import java.util.List;
 import java.util.Map;
 import java.util.UUID;
-import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicBoolean;
 
-import com.sun.jmx.mbeanserver.Introspector;
 import org.apache.hadoop.conf.Configurable;
 import org.apache.hadoop.hdds.DFSConfigKeysLegacy;
 import org.apache.hadoop.hdds.HddsUtils;
@@ -42,7 +42,6 @@
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMGetCertResponseProto;
 import org.apache.hadoop.hdds.protocolPB.SCMSecurityProtocolClientSideTranslatorPB;
-import org.apache.hadoop.hdds.utils.HddsServerUtil;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.security.x509.SecurityConfig;
 import org.apache.hadoop.hdds.security.x509.certificate.client.CertificateClient;
@@ -50,6 +49,7 @@
 import org.apache.hadoop.hdds.security.x509.certificates.utils.CertificateSignRequest;
 import org.apache.hadoop.hdds.server.http.RatisDropwizardExports;
 import org.apache.hadoop.hdds.tracing.TracingUtil;
+import org.apache.hadoop.hdds.utils.HddsServerUtil;
 import org.apache.hadoop.hdds.utils.HddsVersionInfo;
 import org.apache.hadoop.metrics2.util.MBeans;
 import org.apache.hadoop.ozone.container.common.helpers.ContainerUtils;
@@ -61,22 +61,20 @@
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authentication.client.AuthenticationException;
 import org.apache.hadoop.util.ServicePlugin;
+import org.apache.hadoop.util.Time;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
+import com.sun.jmx.mbeanserver.Introspector;
 import static org.apache.hadoop.hdds.security.x509.certificate.utils.CertificateCodec.getX509Certificate;
 import static org.apache.hadoop.hdds.security.x509.certificates.utils.CertificateSignRequest.getEncodedString;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.HDDS_DATANODE_PLUGINS_KEY;
 import static org.apache.hadoop.util.ExitUtil.terminate;
-
-import org.apache.hadoop.util.Time;
 import org.bouncycastle.pkcs.PKCS10CertificationRequest;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import picocli.CommandLine.Command;
 
-import javax.management.ObjectName;
-
 /**
  * Datanode service plugin to start the HDDS container services.
  */
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerUtils.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerUtils.java
index d9f3221..b53fe7e 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerUtils.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerUtils.java
@@ -21,6 +21,8 @@
 import static org.apache.commons.io.FilenameUtils.removeExtension;
 import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result.CONTAINER_CHECKSUM_ERROR;
 import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result.NO_SUCH_ALGORITHM;
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result.CLOSED_CONTAINER_IO;
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result.CONTAINER_NOT_OPEN;
 import static org.apache.hadoop.hdds.scm.protocolPB.ContainerCommandResponseBuilders.getContainerCommandResponse;
 import static org.apache.hadoop.ozone.container.common.impl.ContainerData.CHARSET_ENCODING;
 
@@ -77,8 +79,16 @@
       ContainerCommandRequestProto request) {
     String logInfo = "Operation: {} , Trace ID: {} , Message: {} , " +
         "Result: {} , StorageContainerException Occurred.";
-    log.info(logInfo, request.getCmdType(), request.getTraceID(),
-        ex.getMessage(), ex.getResult().getValueDescriptor().getName(), ex);
+    if (ex.getResult() == CLOSED_CONTAINER_IO ||
+        ex.getResult() == CONTAINER_NOT_OPEN) {
+      if (log.isDebugEnabled()) {
+        log.debug(logInfo, request.getCmdType(), request.getTraceID(),
+            ex.getMessage(), ex.getResult().getValueDescriptor().getName(), ex);
+      }
+    } else {
+      log.info(logInfo, request.getCmdType(), request.getTraceID(),
+          ex.getMessage(), ex.getResult().getValueDescriptor().getName(), ex);
+    }
     return getContainerCommandResponse(request, ex.getResult(), ex.getMessage())
         .build();
   }
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/DatanodeIdYaml.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/DatanodeIdYaml.java
index 44a12c2..3b14641 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/DatanodeIdYaml.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/DatanodeIdYaml.java
@@ -23,6 +23,7 @@
 import java.io.IOException;
 import java.io.OutputStreamWriter;
 import java.io.Writer;
+import java.nio.charset.StandardCharsets;
 import java.util.LinkedHashMap;
 import java.util.Map;
 import java.util.UUID;
@@ -30,6 +31,7 @@
 import org.apache.commons.collections.CollectionUtils;
 import org.apache.commons.collections.MapUtils;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.yaml.snakeyaml.DumperOptions;
 import org.yaml.snakeyaml.Yaml;
 
@@ -57,7 +59,7 @@
     Yaml yaml = new Yaml(options);
 
     try (Writer writer = new OutputStreamWriter(
-        new FileOutputStream(path), "UTF-8")) {
+        new FileOutputStream(path), StandardCharsets.UTF_8)) {
       yaml.dump(getDatanodeDetailsYaml(datanodeDetails), writer);
     }
   }
@@ -83,6 +85,12 @@
           .setIpAddress(datanodeDetailsYaml.getIpAddress())
           .setHostName(datanodeDetailsYaml.getHostName())
           .setCertSerialId(datanodeDetailsYaml.getCertSerialId());
+      if (datanodeDetailsYaml.getPersistedOpState() != null) {
+        builder.setPersistedOpState(HddsProtos.NodeOperationalState.valueOf(
+            datanodeDetailsYaml.getPersistedOpState()));
+      }
+      builder.setPersistedOpStateExpiry(
+          datanodeDetailsYaml.getPersistedOpStateExpiryEpochSec());
 
       if (!MapUtils.isEmpty(datanodeDetailsYaml.getPortDetails())) {
         for (Map.Entry<String, Integer> portEntry :
@@ -106,6 +114,8 @@
     private String ipAddress;
     private String hostName;
     private String certSerialId;
+    private String persistedOpState;
+    private long persistedOpStateExpiryEpochSec = 0;
     private Map<String, Integer> portDetails;
 
     public DatanodeDetailsYaml() {
@@ -114,11 +124,15 @@
 
     private DatanodeDetailsYaml(String uuid, String ipAddress,
                                 String hostName, String certSerialId,
+                                String persistedOpState,
+                                long persistedOpStateExpiryEpochSec,
                                 Map<String, Integer> portDetails) {
       this.uuid = uuid;
       this.ipAddress = ipAddress;
       this.hostName = hostName;
       this.certSerialId = certSerialId;
+      this.persistedOpState = persistedOpState;
+      this.persistedOpStateExpiryEpochSec = persistedOpStateExpiryEpochSec;
       this.portDetails = portDetails;
     }
 
@@ -138,6 +152,14 @@
       return certSerialId;
     }
 
+    public String getPersistedOpState() {
+      return persistedOpState;
+    }
+
+    public long getPersistedOpStateExpiryEpochSec() {
+      return persistedOpStateExpiryEpochSec;
+    }
+
     public Map<String, Integer> getPortDetails() {
       return portDetails;
     }
@@ -158,6 +180,14 @@
       this.certSerialId = certSerialId;
     }
 
+    public void setPersistedOpState(String persistedOpState) {
+      this.persistedOpState = persistedOpState;
+    }
+
+    public void setPersistedOpStateExpiryEpochSec(long opStateExpiryEpochSec) {
+      this.persistedOpStateExpiryEpochSec = opStateExpiryEpochSec;
+    }
+
     public void setPortDetails(Map<String, Integer> portDetails) {
       this.portDetails = portDetails;
     }
@@ -173,11 +203,17 @@
       }
     }
 
+    String persistedOpString = null;
+    if (datanodeDetails.getPersistedOpState() != null) {
+      persistedOpString = datanodeDetails.getPersistedOpState().name();
+    }
     return new DatanodeDetailsYaml(
         datanodeDetails.getUuid().toString(),
         datanodeDetails.getIpAddress(),
         datanodeDetails.getHostName(),
         datanodeDetails.getCertSerialId(),
+        persistedOpString,
+        datanodeDetails.getPersistedOpStateExpiryEpochSec(),
         portDetails);
   }
 }
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
index ba34a29..19cc1e2 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
@@ -291,7 +291,7 @@
    * @return - boolean
    */
   public synchronized boolean isValid() {
-    return !(ContainerDataProto.State.INVALID == state);
+    return ContainerDataProto.State.INVALID != state;
   }
 
   /**
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerDataYaml.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerDataYaml.java
index 74cbbc0..757d7e8 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerDataYaml.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerDataYaml.java
@@ -18,7 +18,6 @@
 
 package org.apache.hadoop.ozone.container.common.impl;
 
-import java.beans.IntrospectionException;
 import java.io.ByteArrayInputStream;
 import java.io.File;
 import java.io.FileInputStream;
@@ -201,8 +200,7 @@
    */
   private static class ContainerDataRepresenter extends Representer {
     @Override
-    protected Set<Property> getProperties(Class<? extends Object> type)
-        throws IntrospectionException {
+    protected Set<Property> getProperties(Class<? extends Object> type) {
       Set<Property> set = super.getProperties(type);
       Set<Property> filtered = new TreeSet<Property>();
 
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java
index 685a1d9..5d181ec 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java
@@ -69,7 +69,7 @@
   public void run() {
     publishReport();
     if (!executor.isShutdown() &&
-        !(context.getState() == DatanodeStates.SHUTDOWN)) {
+        (context.getState() != DatanodeStates.SHUTDOWN)) {
       executor.schedule(this,
           getReportFrequency(), TimeUnit.MILLISECONDS);
     }
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
index c0e57e8..d0034df 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
@@ -51,6 +51,7 @@
 import org.apache.hadoop.ozone.container.common.statemachine.commandhandler.DeleteContainerCommandHandler;
 import org.apache.hadoop.ozone.container.common.statemachine.commandhandler.FinalizeNewLayoutVersionCommandHandler;
 import org.apache.hadoop.ozone.container.common.statemachine.commandhandler.ReplicateContainerCommandHandler;
+import org.apache.hadoop.ozone.container.common.statemachine.commandhandler.SetNodeOperationalStateCommandHandler;
 import org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker;
 import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
 import org.apache.hadoop.ozone.container.replication.ContainerReplicator;
@@ -178,6 +179,7 @@
         .addHandler(new ClosePipelineCommandHandler())
         .addHandler(new CreatePipelineCommandHandler(conf))
         .addHandler(new FinalizeNewLayoutVersionCommandHandler())
+        .addHandler(new SetNodeOperationalStateCommandHandler(conf))
         .setConnectionManager(connectionManager)
         .setContainer(container)
         .setContext(context)
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
index 4cd769f..f87561a 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
@@ -32,15 +32,23 @@
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;
 import java.util.function.Consumer;
 
 import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.Sets;
+import com.google.protobuf.Descriptors.Descriptor;
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.CommandStatus.Status;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.CommandStatusReportsProto;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerAction;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.IncrementalContainerReportProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.NodeReportProto;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.PipelineAction;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.PipelineReportsProto;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.SCMCommandProto;
 import org.apache.hadoop.ozone.container.common.states.DatanodeState;
 import org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState;
@@ -63,6 +71,27 @@
  * Current Context of State Machine.
  */
 public class StateContext {
+
+  @VisibleForTesting
+  final static String CONTAINER_REPORTS_PROTO_NAME =
+      ContainerReportsProto.getDescriptor().getFullName();
+  @VisibleForTesting
+  final static String NODE_REPORT_PROTO_NAME =
+      NodeReportProto.getDescriptor().getFullName();
+  @VisibleForTesting
+  final static String PIPELINE_REPORTS_PROTO_NAME =
+      PipelineReportsProto.getDescriptor().getFullName();
+  @VisibleForTesting
+  final static String COMMAND_STATUS_REPORTS_PROTO_NAME =
+      CommandStatusReportsProto.getDescriptor().getFullName();
+  @VisibleForTesting
+  final static String INCREMENTAL_CONTAINER_REPORT_PROTO_NAME =
+      IncrementalContainerReportProto.getDescriptor().getFullName();
+  // Accepted types of reports that can be queued to incrementalReportsQueue
+  private final static Set<String> ACCEPTED_INCREMENTAL_REPORT_TYPE_SET =
+      Sets.newHashSet(COMMAND_STATUS_REPORTS_PROTO_NAME,
+          INCREMENTAL_CONTAINER_REPORT_PROTO_NAME);
+
   static final Logger LOG =
       LoggerFactory.getLogger(StateContext.class);
   private final Queue<SCMCommand> commandQueue;
@@ -72,7 +101,13 @@
   private final AtomicLong stateExecutionCount;
   private final ConfigurationSource conf;
   private final Set<InetSocketAddress> endpoints;
-  private final Map<InetSocketAddress, List<GeneratedMessage>> reports;
+  // Only the latest full report of each type is kept
+  private final AtomicReference<GeneratedMessage> containerReports;
+  private final AtomicReference<GeneratedMessage> nodeReport;
+  private final AtomicReference<GeneratedMessage> pipelineReports;
+  // Incremental reports are queued in the map below
+  private final Map<InetSocketAddress, List<GeneratedMessage>>
+      incrementalReportsQueue;
   private final Map<InetSocketAddress, Queue<ContainerAction>> containerActions;
   private final Map<InetSocketAddress, Queue<PipelineAction>> pipelineActions;
   private DatanodeStateMachine.DatanodeStates state;
@@ -102,7 +137,10 @@
     this.parent = parent;
     commandQueue = new LinkedList<>();
     cmdStatusMap = new ConcurrentHashMap<>();
-    reports = new HashMap<>();
+    incrementalReportsQueue = new HashMap<>();
+    containerReports = new AtomicReference<>();
+    nodeReport = new AtomicReference<>();
+    pipelineReports = new AtomicReference<>();
     endpoints = new HashSet<>();
     containerActions = new HashMap<>();
     pipelineActions = new HashMap<>();
@@ -190,17 +228,34 @@
   public boolean getShutdownOnError() {
     return shutdownOnError;
   }
+
   /**
    * Adds the report to report queue.
    *
    * @param report report to be added
    */
   public void addReport(GeneratedMessage report) {
-    if (report != null) {
-      synchronized (reports) {
-        for (InetSocketAddress endpoint : endpoints) {
-          reports.get(endpoint).add(report);
+    if (report == null) {
+      return;
+    }
+    final Descriptor descriptor = report.getDescriptorForType();
+    Preconditions.checkState(descriptor != null);
+    final String reportType = descriptor.getFullName();
+    Preconditions.checkState(reportType != null);
+    for (InetSocketAddress endpoint : endpoints) {
+      if (reportType.equals(CONTAINER_REPORTS_PROTO_NAME)) {
+        containerReports.set(report);
+      } else if (reportType.equals(NODE_REPORT_PROTO_NAME)) {
+        nodeReport.set(report);
+      } else if (reportType.equals(PIPELINE_REPORTS_PROTO_NAME)) {
+        pipelineReports.set(report);
+      } else if (ACCEPTED_INCREMENTAL_REPORT_TYPE_SET.contains(reportType)) {
+        synchronized (incrementalReportsQueue) {
+          incrementalReportsQueue.get(endpoint).add(report);
         }
+      } else {
+        throw new IllegalArgumentException(
+            "Unidentified report message type: " + reportType);
       }
     }
   }
@@ -214,9 +269,24 @@
    */
   public void putBackReports(List<GeneratedMessage> reportsToPutBack,
                              InetSocketAddress endpoint) {
-    synchronized (reports) {
-      if (reports.containsKey(endpoint)){
-        reports.get(endpoint).addAll(0, reportsToPutBack);
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("endpoint: {}, size of reportsToPutBack: {}",
+          endpoint, reportsToPutBack.size());
+    }
+    // We don't expect too many reports to be put back
+    for (GeneratedMessage report : reportsToPutBack) {
+      final Descriptor descriptor = report.getDescriptorForType();
+      Preconditions.checkState(descriptor != null);
+      final String reportType = descriptor.getFullName();
+      Preconditions.checkState(reportType != null);
+      if (!ACCEPTED_INCREMENTAL_REPORT_TYPE_SET.contains(reportType)) {
+        throw new IllegalArgumentException(
+            "Unaccepted report message type: " + reportType);
+      }
+    }
+    synchronized (incrementalReportsQueue) {
+      if (incrementalReportsQueue.containsKey(endpoint)){
+        incrementalReportsQueue.get(endpoint).addAll(0, reportsToPutBack);
       }
     }
   }
@@ -232,6 +302,22 @@
     return getReports(endpoint, Integer.MAX_VALUE);
   }
 
+  List<GeneratedMessage> getIncrementalReports(
+      InetSocketAddress endpoint, int maxLimit) {
+    List<GeneratedMessage> reportsToReturn = new LinkedList<>();
+    synchronized (incrementalReportsQueue) {
+      List<GeneratedMessage> reportsForEndpoint =
+          incrementalReportsQueue.get(endpoint);
+      if (reportsForEndpoint != null) {
+        List<GeneratedMessage> tempList = reportsForEndpoint.subList(
+            0, min(reportsForEndpoint.size(), maxLimit));
+        reportsToReturn.addAll(tempList);
+        tempList.clear();
+      }
+    }
+    return reportsToReturn;
+  }
+
   /**
    * Returns available reports from the report queue with a max limit on
    * list size, or empty list if the queue is empty.
@@ -240,15 +326,19 @@
    */
   public List<GeneratedMessage> getReports(InetSocketAddress endpoint,
                                            int maxLimit) {
-    List<GeneratedMessage> reportsToReturn = new LinkedList<>();
-    synchronized (reports) {
-      List<GeneratedMessage> reportsForEndpoint = reports.get(endpoint);
-      if (reportsForEndpoint != null) {
-        List<GeneratedMessage> tempList = reportsForEndpoint.subList(
-            0, min(reportsForEndpoint.size(), maxLimit));
-        reportsToReturn.addAll(tempList);
-        tempList.clear();
-      }
+    List<GeneratedMessage> reportsToReturn =
+        getIncrementalReports(endpoint, maxLimit);
+    GeneratedMessage report = containerReports.get();
+    if (report != null) {
+      reportsToReturn.add(report);
+    }
+    report = nodeReport.get();
+    if (report != null) {
+      reportsToReturn.add(report);
+    }
+    report = pipelineReports.get();
+    if (report != null) {
+      reportsToReturn.add(report);
     }
     return reportsToReturn;
   }
@@ -580,7 +670,22 @@
       this.endpoints.add(endpoint);
       this.containerActions.put(endpoint, new LinkedList<>());
       this.pipelineActions.put(endpoint, new LinkedList<>());
-      this.reports.put(endpoint, new LinkedList<>());
+      this.incrementalReportsQueue.put(endpoint, new LinkedList<>());
     }
   }
+
+  @VisibleForTesting
+  public GeneratedMessage getContainerReports() {
+    return containerReports.get();
+  }
+
+  @VisibleForTesting
+  public GeneratedMessage getNodeReport() {
+    return nodeReport.get();
+  }
+
+  @VisibleForTesting
+  public GeneratedMessage getPipelineReports() {
+    return pipelineReports.get();
+  }
 }
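A hedged sketch of the reworked report routing from a caller's point of view: full reports are kept only as the latest copy, while incremental and command-status reports queue per endpoint. The StateContext and endpoint are assumed to be already wired up, and getDefaultInstance() merely stands in for real reports.

import java.net.InetSocketAddress;
import java.util.List;
import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.IncrementalContainerReportProto;
import org.apache.hadoop.ozone.container.common.statemachine.StateContext;

final class ReportRoutingSketch {
  static void demo(StateContext ctx, InetSocketAddress endpoint) {
    // endpoint is assumed to be registered with ctx already.
    ctx.addReport(ContainerReportsProto.getDefaultInstance());            // full: kept as latest
    ctx.addReport(ContainerReportsProto.getDefaultInstance());            // replaces the one above
    ctx.addReport(IncrementalContainerReportProto.getDefaultInstance());  // incremental: queued
    ctx.addReport(IncrementalContainerReportProto.getDefaultInstance());  // queued as well

    List<?> toSend = ctx.getReports(endpoint, 100);
    // Expect three messages: two incremental reports plus the single latest
    // full container report.
    System.out.println("reports to send: " + toSend.size());
  }
}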
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
index 91ab4c9..10e6797 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
@@ -42,6 +42,8 @@
 import org.apache.hadoop.ozone.container.common.statemachine
     .SCMConnectionManager;
 import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
+import org.apache.hadoop.ozone.container.metadata.DatanodeStore;
+import org.apache.hadoop.ozone.container.metadata.DatanodeStoreSchemaTwoImpl;
 import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
 import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
 import org.apache.hadoop.ozone.protocol.commands.DeleteBlockCommandStatus;
@@ -59,6 +61,8 @@
 
 import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
     .Result.CONTAINER_NOT_FOUND;
+import static org.apache.hadoop.ozone.OzoneConsts.SCHEMA_V1;
+import static org.apache.hadoop.ozone.OzoneConsts.SCHEMA_V2;
 
 /**
  * Handle block deletion commands.
@@ -116,6 +120,7 @@
             DeleteBlockTransactionResult.newBuilder();
         txResultBuilder.setTxID(entry.getTxID());
         long containerId = entry.getContainerID();
+        int newDeletionBlocks = 0;
         try {
           Container cont = containerSet.getContainer(containerId);
           if (cont == null) {
@@ -129,7 +134,16 @@
                 cont.getContainerData();
             cont.writeLock();
             try {
-              deleteKeyValueContainerBlocks(containerData, entry);
+              if (containerData.getSchemaVersion().equals(SCHEMA_V1)) {
+                markBlocksForDeletionSchemaV1(containerData, entry);
+              } else if (containerData.getSchemaVersion().equals(SCHEMA_V2)) {
+                markBlocksForDeletionSchemaV2(containerData, entry,
+                    newDeletionBlocks, entry.getTxID());
+              } else {
+                throw new UnsupportedOperationException(
+                    "Only schema version 1 and schema version 2 are "
+                        + "supported.");
+              }
             } finally {
               cont.writeUnlock();
             }
@@ -187,10 +201,126 @@
    * @param delTX a block deletion transaction.
    * @throws IOException if I/O error occurs.
    */
-  private void deleteKeyValueContainerBlocks(
+
+  private void markBlocksForDeletionSchemaV2(
+      KeyValueContainerData containerData, DeletedBlocksTransaction delTX,
+      int newDeletionBlocks, long txnID) throws IOException {
+    long containerId = delTX.getContainerID();
+    if (!isTxnIdValid(containerId, containerData, delTX)) {
+      return;
+    }
+    try (ReferenceCountedDB containerDB = BlockUtils
+        .getDB(containerData, conf)) {
+      DatanodeStore ds = containerDB.getStore();
+      DatanodeStoreSchemaTwoImpl dnStoreTwoImpl =
+          (DatanodeStoreSchemaTwoImpl) ds;
+      Table<Long, DeletedBlocksTransaction> delTxTable =
+          dnStoreTwoImpl.getDeleteTransactionTable();
+      try (BatchOperation batch = containerDB.getStore().getBatchHandler()
+          .initBatchOperation()) {
+        delTxTable.putWithBatch(batch, txnID, delTX);
+        newDeletionBlocks += delTX.getLocalIDList().size();
+        updateMetaData(containerData, delTX, newDeletionBlocks, containerDB,
+            batch);
+        containerDB.getStore().getBatchHandler().commitBatchOperation(batch);
+      }
+    }
+  }
+
+  private void markBlocksForDeletionSchemaV1(
       KeyValueContainerData containerData, DeletedBlocksTransaction delTX)
       throws IOException {
     long containerId = delTX.getContainerID();
+    if (!isTxnIdValid(containerId, containerData, delTX)) {
+      return;
+    }
+    int newDeletionBlocks = 0;
+    try (ReferenceCountedDB containerDB = BlockUtils
+        .getDB(containerData, conf)) {
+      Table<String, BlockData> blockDataTable =
+          containerDB.getStore().getBlockDataTable();
+      Table<String, ChunkInfoList> deletedBlocksTable =
+          containerDB.getStore().getDeletedBlocksTable();
+
+      try (BatchOperation batch = containerDB.getStore().getBatchHandler()
+          .initBatchOperation()) {
+        for (Long blkLong : delTX.getLocalIDList()) {
+          String blk = blkLong.toString();
+          BlockData blkInfo = blockDataTable.get(blk);
+          if (blkInfo != null) {
+            String deletingKey = OzoneConsts.DELETING_KEY_PREFIX + blk;
+            if (blockDataTable.get(deletingKey) != null
+                || deletedBlocksTable.get(blk) != null) {
+              if (LOG.isDebugEnabled()) {
+                LOG.debug(String.format(
+                    "Ignoring delete for block %s in container %d."
+                        + " Entry already added.", blk, containerId));
+              }
+              continue;
+            }
+            // Found the block in container db,
+            // use an atomic update to change its state to deleting.
+            blockDataTable.putWithBatch(batch, deletingKey, blkInfo);
+            blockDataTable.deleteWithBatch(batch, blk);
+            newDeletionBlocks++;
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("Transited Block {} to DELETING state in container {}",
+                  blk, containerId);
+            }
+          } else {
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("Block {} not found or already under deletion in"
+                  + " container {}, skip deleting it.", blk, containerId);
+            }
+          }
+        }
+        updateMetaData(containerData, delTX, newDeletionBlocks, containerDB,
+            batch);
+        containerDB.getStore().getBatchHandler().commitBatchOperation(batch);
+      } catch (IOException e) {
+        // if some blocks failed to delete, we fail this TX,
+        // without sending this ACK to SCM, SCM will resend the TX
+        // with a certain number of retries.
+        throw new IOException(
+            "Failed to delete blocks for TXID = " + delTX.getTxID(), e);
+      }
+    }
+  }
+
+  private void updateMetaData(KeyValueContainerData containerData,
+      DeletedBlocksTransaction delTX, int newDeletionBlocks,
+      ReferenceCountedDB containerDB, BatchOperation batchOperation)
+      throws IOException {
+    if (newDeletionBlocks > 0) {
+      // Update the DB counters; the caller commits the batch.
+      Table<String, Long> metadataTable =
+          containerDB.getStore().getMetadataTable();
+
+      // The delete transaction ID is only advanced when the incoming
+      // transaction ID is greater than the currently recorded one.
+      if (delTX.getTxID() > containerData.getDeleteTransactionId()) {
+        // Update the delete transaction ID in DB.
+        metadataTable
+            .putWithBatch(batchOperation, OzoneConsts.DELETE_TRANSACTION_KEY,
+                delTX.getTxID());
+      }
+
+      long pendingDeleteBlocks =
+          containerData.getNumPendingDeletionBlocks() + newDeletionBlocks;
+      metadataTable
+          .putWithBatch(batchOperation, OzoneConsts.PENDING_DELETE_BLOCK_COUNT,
+              pendingDeleteBlocks);
+
+      // update pending deletion blocks count and delete transaction ID in
+      // in-memory container status
+      containerData.updateDeleteTransactionId(delTX.getTxID());
+      containerData.incrPendingDeletionBlocks(newDeletionBlocks);
+    }
+  }
+
+  private boolean isTxnIdValid(long containerId,
+      KeyValueContainerData containerData, DeletedBlocksTransaction delTX) {
+    boolean b = true;
     if (LOG.isDebugEnabled()) {
       LOG.debug("Processing Container : {}, DB path : {}", containerId,
           containerData.getMetadataPath());
@@ -202,92 +332,9 @@
                 + " Outdated delete transactionId %d < %d", containerId,
             delTX.getTxID(), containerData.getDeleteTransactionId()));
       }
-      return;
+      b = false;
     }
-
-    int newDeletionBlocks = 0;
-    try(ReferenceCountedDB containerDB =
-            BlockUtils.getDB(containerData, conf)) {
-      Table<String, BlockData> blockDataTable =
-              containerDB.getStore().getBlockDataTable();
-      Table<String, ChunkInfoList> deletedBlocksTable =
-              containerDB.getStore().getDeletedBlocksTable();
-
-      for (Long blkLong : delTX.getLocalIDList()) {
-        String blk = blkLong.toString();
-        BlockData blkInfo = blockDataTable.get(blk);
-        if (blkInfo != null) {
-          String deletingKey = OzoneConsts.DELETING_KEY_PREFIX + blk;
-
-          if (blockDataTable.get(deletingKey) != null
-              || deletedBlocksTable.get(blk) != null) {
-            if (LOG.isDebugEnabled()) {
-              LOG.debug(String.format(
-                  "Ignoring delete for block %s in container %d."
-                      + " Entry already added.", blk, containerId));
-            }
-            continue;
-          }
-
-          try(BatchOperation batch = containerDB.getStore()
-              .getBatchHandler().initBatchOperation()) {
-            // Found the block in container db,
-            // use an atomic update to change its state to deleting.
-            blockDataTable.putWithBatch(batch, deletingKey, blkInfo);
-            blockDataTable.deleteWithBatch(batch, blk);
-            containerDB.getStore().getBatchHandler()
-                .commitBatchOperation(batch);
-            newDeletionBlocks++;
-            if (LOG.isDebugEnabled()) {
-              LOG.debug("Transited Block {} to DELETING state in container {}",
-                  blk, containerId);
-            }
-          } catch (IOException e) {
-            // if some blocks failed to delete, we fail this TX,
-            // without sending this ACK to SCM, SCM will resend the TX
-            // with a certain number of retries.
-            throw new IOException(
-                "Failed to delete blocks for TXID = " + delTX.getTxID(), e);
-          }
-        } else {
-          if (LOG.isDebugEnabled()) {
-            LOG.debug("Block {} not found or already under deletion in"
-                + " container {}, skip deleting it.", blk, containerId);
-          }
-        }
-      }
-
-      if (newDeletionBlocks > 0) {
-        // Finally commit the DB counters.
-        try(BatchOperation batchOperation =
-                containerDB.getStore().getBatchHandler().initBatchOperation()) {
-          Table< String, Long > metadataTable = containerDB.getStore()
-              .getMetadataTable();
-
-          // In memory is updated only when existing delete transactionID is
-          // greater.
-          if (delTX.getTxID() > containerData.getDeleteTransactionId()) {
-            // Update in DB pending delete key count and delete transaction ID.
-            metadataTable.putWithBatch(batchOperation,
-                OzoneConsts.DELETE_TRANSACTION_KEY, delTX.getTxID());
-          }
-
-          long pendingDeleteBlocks =
-              containerData.getNumPendingDeletionBlocks() + newDeletionBlocks;
-          metadataTable.putWithBatch(batchOperation,
-              OzoneConsts.PENDING_DELETE_BLOCK_COUNT, pendingDeleteBlocks);
-
-          containerDB.getStore().getBatchHandler()
-              .commitBatchOperation(batchOperation);
-
-          // update pending deletion blocks count and delete transaction ID in
-          // in-memory container status
-          containerData.updateDeleteTransactionId(delTX.getTxID());
-
-          containerData.incrPendingDeletionBlocks(newDeletionBlocks);
-        }
-      }
-    }
+    return b;
   }
 
   @Override
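
Note on the DeleteBlocksCommandHandler change above: with schema V2 the whole DeletedBlocksTransaction is persisted under its transaction ID and the container counters are updated in the same batch, instead of rewriting each block key as in schema V1. A rough, dependency-free sketch of that single-commit pattern follows; the Store/Batch classes and key names are simplified stand-ins, not the Ozone Table/BatchOperation API.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-ins for the store/batch abstraction; not the Ozone API.
class BatchedDeleteMarkingSketch {

  static class Batch {
    final Map<String, Object> writes = new HashMap<>();
    void put(String key, Object value) {
      writes.put(key, value);
    }
  }

  static class Store {
    final Map<String, Object> data = new HashMap<>();
    // In RocksDB a write batch commits atomically; a plain map stands in here.
    void commit(Batch batch) {
      data.putAll(batch.writes);
    }
  }

  /**
   * Schema V2 style marking: record the whole transaction under its txID and
   * bump the pending-delete counter in the same batch, so both changes land
   * together or not at all.
   */
  static void markForDeletion(Store store, long txId, List<Long> localIds,
      long currentPendingDeletes) {
    Batch batch = new Batch();
    batch.put("deleteTxns/" + txId, localIds);
    batch.put("metadata/pendingDeleteBlockCount",
        currentPendingDeletes + localIds.size());
    store.commit(batch);
  }
}
```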
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/SetNodeOperationalStateCommandHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/SetNodeOperationalStateCommandHandler.java
new file mode 100644
index 0000000..4a46d5f
--- /dev/null
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/SetNodeOperationalStateCommandHandler.java
@@ -0,0 +1,157 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.container.common.statemachine.commandhandler;
+
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.utils.HddsServerUtil;
+import org.apache.hadoop.ozone.container.common.helpers.ContainerUtils;
+import org.apache.hadoop.ozone.container.common.statemachine.SCMConnectionManager;
+import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
+import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
+import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+import org.apache.hadoop.ozone.protocol.commands.SetNodeOperationalStateCommand;
+import org.apache.hadoop.util.Time;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.hdds.protocol.proto.
+    StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+
+
+/**
+ * Handle the SetNodeOperationalStateCommand sent from SCM to the datanode
+ * to persist the current operational state.
+ */
+public class SetNodeOperationalStateCommandHandler implements CommandHandler {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(SetNodeOperationalStateCommandHandler.class);
+  private final ConfigurationSource conf;
+  private final AtomicInteger invocationCount = new AtomicInteger(0);
+  private final AtomicLong totalTime = new AtomicLong(0);
+
+  /**
+   * Set Node State command handler.
+   *
+   * @param conf - Configuration for the datanode.
+   */
+  public SetNodeOperationalStateCommandHandler(ConfigurationSource conf) {
+    this.conf = conf;
+  }
+
+  /**
+   * Handles a given SCM command.
+   *
+   * @param command - SCM Command
+   * @param container - Ozone Container.
+   * @param context - Current Context.
+   * @param connectionManager - The SCMs that we are talking to.
+   */
+  @Override
+  public void handle(SCMCommand command, OzoneContainer container,
+      StateContext context, SCMConnectionManager connectionManager) {
+    long startTime = Time.monotonicNow();
+    invocationCount.incrementAndGet();
+    StorageContainerDatanodeProtocolProtos.SetNodeOperationalStateCommandProto
+        setNodeCmdProto = null;
+
+    if (command.getType() != Type.setNodeOperationalStateCommand) {
+      LOG.warn("Skipping handling command, expected command "
+              + "type {} but found {}",
+          Type.setNodeOperationalStateCommand, command.getType());
+      return;
+    }
+    SetNodeOperationalStateCommand setNodeCmd =
+        (SetNodeOperationalStateCommand) command;
+    setNodeCmdProto = setNodeCmd.getProto();
+    DatanodeDetails dni = context.getParent().getDatanodeDetails();
+    dni.setPersistedOpState(setNodeCmdProto.getNodeOperationalState());
+    dni.setPersistedOpStateExpiryEpochSec(
+        setNodeCmd.getStateExpiryEpochSeconds());
+    try {
+      persistDatanodeDetails(dni);
+    } catch (IOException ioe) {
+      LOG.error("Failed to persist the datanode state", ioe);
+      // TODO - this should probably be raised, but it will break the command
+      //      handler interface.
+    }
+    totalTime.addAndGet(Time.monotonicNow() - startTime);
+  }
+
+  // TODO - this duplicates code in HddsDatanodeService and InitDatanodeState
+  //        Need to refactor.
+  private void persistDatanodeDetails(DatanodeDetails dnDetails)
+      throws IOException {
+    String idFilePath = HddsServerUtil.getDatanodeIdFilePath(conf);
+    if (idFilePath == null || idFilePath.isEmpty()) {
+      LOG.error("A valid path is needed for config setting {}",
+          ScmConfigKeys.OZONE_SCM_DATANODE_ID_DIR);
+      throw new IllegalArgumentException(
+          ScmConfigKeys.OZONE_SCM_DATANODE_ID_DIR +
+              " must be defined. See" +
+              " https://wiki.apache.org/hadoop/Ozone#Configuration" +
+              " for details on configuring Ozone.");
+    }
+
+    Preconditions.checkNotNull(idFilePath);
+    File idFile = new File(idFilePath);
+    ContainerUtils.writeDatanodeDetailsTo(dnDetails, idFile);
+  }
+
+  /**
+   * Returns the command type that this command handler handles.
+   *
+   * @return Type
+   */
+  @Override
+  public StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type
+      getCommandType() {
+    return Type.setNodeOperationalStateCommand;
+  }
+
+  /**
+   * Returns number of times this handler has been invoked.
+   *
+   * @return int
+   */
+  @Override
+  public int getInvocationCount() {
+    return invocationCount.intValue();
+  }
+
+  /**
+   * Returns the average time this function takes to run.
+   *
+   * @return long
+   */
+  @Override
+  public long getAverageRunTime() {
+    final int invocations = invocationCount.get();
+    return invocations == 0 ?
+        0 : totalTime.get() / invocations;
+  }
+}
\ No newline at end of file
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/HeartbeatEndpointTask.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/HeartbeatEndpointTask.java
index 4a40496..7c6819d 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/HeartbeatEndpointTask.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/HeartbeatEndpointTask.java
@@ -56,6 +56,7 @@
 import org.apache.hadoop.ozone.protocol.commands.FinalizeNewLayoutVersionCommand;
 import org.apache.hadoop.ozone.protocol.commands.ReplicateContainerCommand;
 
+import org.apache.hadoop.ozone.protocol.commands.SetNodeOperationalStateCommand;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -177,15 +178,15 @@
       if (LOG.isDebugEnabled()) {
         LOG.debug("Sending heartbeat message :: {}", request.toString());
       }
-      SCMHeartbeatResponseProto reponse = rpcEndpoint.getEndPoint()
+      SCMHeartbeatResponseProto response = rpcEndpoint.getEndPoint()
           .sendHeartbeat(request);
-      processResponse(reponse, datanodeDetailsProto);
+      processResponse(response, datanodeDetailsProto);
       rpcEndpoint.setLastSuccessfulHeartbeat(ZonedDateTime.now());
       rpcEndpoint.zeroMissedCount();
     } catch (IOException ex) {
+      Preconditions.checkState(requestBuilder != null);
       // put back the reports which failed to be sent
       putBackReports(requestBuilder);
-
       rpcEndpoint.logIfNeeded(ex);
     } finally {
       rpcEndpoint.unlock();
@@ -196,12 +197,9 @@
   // TODO: Make it generic.
   private void putBackReports(SCMHeartbeatRequestProto.Builder requestBuilder) {
     List<GeneratedMessage> reports = new LinkedList<>();
-    if (requestBuilder.hasContainerReport()) {
-      reports.add(requestBuilder.getContainerReport());
-    }
-    if (requestBuilder.hasNodeReport()) {
-      reports.add(requestBuilder.getNodeReport());
-    }
+    // We only put back CommandStatusReports and IncrementalContainerReport
+    // because those are incremental. Container/Node/PipelineReport are full
+    // snapshots, so keeping only the latest of each is enough.
     if (requestBuilder.getCommandStatusReportsCount() != 0) {
       reports.addAll(requestBuilder.getCommandStatusReportsList());
     }
@@ -229,6 +227,7 @@
           } else {
             requestBuilder.setField(descriptor, report);
           }
+          break;
         }
       }
     }
@@ -377,6 +376,17 @@
         }
         this.context.addCommand(finalizeNewLayoutVersionCommand);
         break;
+      case setNodeOperationalStateCommand:
+        SetNodeOperationalStateCommand setNodeOperationalStateCommand =
+            SetNodeOperationalStateCommand.getFromProtobuf(
+                commandResponseProto.getSetNodeOperationalStateCommandProto());
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("Received SCM set operational state command. State: {} " +
+              "Expiry: {}", setNodeOperationalStateCommand.getOpState(),
+              setNodeOperationalStateCommand.getStateExpiryEpochSeconds());
+        }
+        this.context.addCommand(setNodeOperationalStateCommand);
+        break;
       default:
         throw new IllegalArgumentException("Unknown response : "
             + commandResponseProto.getCommandType().name());
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
index 3647af1..d59efdc 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
@@ -45,9 +45,7 @@
 import io.opentracing.Scope;
 import io.opentracing.Span;
 import io.opentracing.util.GlobalTracer;
-import org.apache.ratis.thirdparty.io.grpc.BindableService;
 import org.apache.ratis.thirdparty.io.grpc.Server;
-import org.apache.ratis.thirdparty.io.grpc.ServerBuilder;
 import org.apache.ratis.thirdparty.io.grpc.ServerInterceptors;
 import org.apache.ratis.thirdparty.io.grpc.netty.GrpcSslContexts;
 import org.apache.ratis.thirdparty.io.grpc.netty.NettyServerBuilder;
@@ -78,8 +76,7 @@
    */
   public XceiverServerGrpc(DatanodeDetails datanodeDetails,
       ConfigurationSource conf,
-      ContainerDispatcher dispatcher, CertificateClient caClient,
-      BindableService... additionalServices) {
+      ContainerDispatcher dispatcher, CertificateClient caClient) {
     Preconditions.checkNotNull(conf);
 
     this.id = datanodeDetails.getUuid();
@@ -92,17 +89,10 @@
       this.port = 0;
     }
 
-    NettyServerBuilder nettyServerBuilder =
-        ((NettyServerBuilder) ServerBuilder.forPort(port))
-            .maxInboundMessageSize(OzoneConsts.OZONE_SCM_CHUNK_MAX_SIZE);
-
-    GrpcServerInterceptor tracingInterceptor = new GrpcServerInterceptor();
-    nettyServerBuilder.addService(ServerInterceptors.intercept(
-        new GrpcXceiverService(dispatcher), tracingInterceptor));
-
-    for (BindableService service : additionalServices) {
-      nettyServerBuilder.addService(service);
-    }
+    NettyServerBuilder nettyServerBuilder = NettyServerBuilder.forPort(port)
+        .maxInboundMessageSize(OzoneConsts.OZONE_SCM_CHUNK_MAX_SIZE)
+        .addService(ServerInterceptors.intercept(
+            new GrpcXceiverService(dispatcher), new GrpcServerInterceptor()));
 
     SecurityConfig secConf = new SecurityConfig(conf);
     if (secConf.isGrpcTlsEnabled()) {
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
index 42373bd..5182279 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
@@ -299,8 +299,8 @@
             snapshotFile);
         throw ioe;
       }
-      LOG.info("{}: Finished taking a snapshot at:{} file:{} time:{}", gid, ti,
-          snapshotFile, (Time.monotonicNow() - startTime));
+      LOG.info("{}: Finished taking a snapshot at:{} file:{} took: {} ms",
+          gid, ti, snapshotFile, (Time.monotonicNow() - startTime));
       return ti.getIndex();
     }
     return -1;
@@ -418,9 +418,9 @@
       ContainerCommandRequestProto requestProto, long entryIndex, long term,
       long startTime) {
     final WriteChunkRequestProto write = requestProto.getWriteChunk();
-    RaftServer server = ratisServer.getServer();
     try {
-      if (server.getDivision(gid).getInfo().isLeader()) {
+      RaftServer.Division division = ratisServer.getServerDivision();
+      if (division.getInfo().isLeader()) {
         stateMachineDataCache.put(entryIndex, write.getData());
       }
     } catch (InterruptedException ioe) {
@@ -445,7 +445,7 @@
             return runCommand(requestProto, context);
           } catch (Exception e) {
             LOG.error("{}: writeChunk writeStateMachineData failed: blockId" +
-                "{} logIndex {} chunkName {} {}", gid, write.getBlockID(),
+                "{} logIndex {} chunkName {}", gid, write.getBlockID(),
                 entryIndex, write.getChunkData().getChunkName(), e);
             metrics.incNumWriteDataFails();
             // write chunks go in parallel. It's possible that one write chunk
@@ -458,8 +458,8 @@
 
     writeChunkFutureMap.put(entryIndex, writeChunkFuture);
     if (LOG.isDebugEnabled()) {
-      LOG.error("{}: writeChunk writeStateMachineData : blockId" +
-              "{} logIndex {} chunkName {} {}", gid, write.getBlockID(),
+      LOG.debug("{}: writeChunk writeStateMachineData : blockId" +
+              "{} logIndex {} chunkName {}", gid, write.getBlockID(),
           entryIndex, write.getChunkData().getChunkName());
     }
     // Remove the future once it finishes execution from the
@@ -760,7 +760,8 @@
             }
           }, getCommandExecutor(requestProto));
       future.thenApply(r -> {
-        if (trx.getServerRole() == RaftPeerRole.LEADER) {
+        if (trx.getServerRole() == RaftPeerRole.LEADER
+            && trx.getStateMachineContext() != null) {
           long startTime = (long) trx.getStateMachineContext();
           metrics.incPipelineLatency(cmdType,
               Time.monotonicNowNanos() - startTime);
@@ -808,6 +809,12 @@
         }
         return applyTransactionFuture;
       }).whenComplete((r, t) ->  {
+        if (t != null) {
+          stateMachineHealthy.set(false);
+          LOG.error("gid {} : ApplyTransaction failed. cmd {} logIndex "
+                  + "{} exception {}", gid, requestProto.getCmdType(),
+              index, t);
+        }
         applyTransactionSemaphore.release();
         metrics.recordApplyTransactionCompletion(
             Time.monotonicNowNanos() - applyTxnStartTime);
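
Note on the ContainerStateMachine hunk above: a failed applyTransaction now flips the state machine to unhealthy, and the semaphore permit is still released in whenComplete regardless of the outcome. A minimal JDK-only sketch of that pattern, with invented names:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicBoolean;

class ApplyFailureSketch {
  private final AtomicBoolean healthy = new AtomicBoolean(true);
  private final Semaphore applyPermits = new Semaphore(16);

  CompletableFuture<String> apply(long index) throws InterruptedException {
    applyPermits.acquire();
    return CompletableFuture
        .supplyAsync(() -> "applied-" + index)
        .whenComplete((result, failure) -> {
          if (failure != null) {
            // Any apply failure flips the health flag so later checks can
            // refuse new transactions instead of silently continuing.
            healthy.set(false);
          }
          // The permit is released on both success and failure paths.
          applyPermits.release();
        });
  }
}
```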
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
index eca0b1c..faa69a8 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
@@ -515,6 +515,15 @@
     return server;
   }
 
+  public RaftServer.Division getServerDivision() throws IOException {
+    return getServerDivision(server.getGroupIds().iterator().next());
+  }
+
+  public RaftServer.Division getServerDivision(RaftGroupId id)
+      throws IOException {
+    return server.getDivision(id);
+  }
+
   private void processReply(RaftClientReply reply) throws IOException {
     // NotLeader exception is thrown only when the raft server to which the
     // request is submitted is not the leader. The request will be rejected
@@ -596,10 +605,16 @@
   private RaftClientRequest createRaftClientRequest(
       ContainerCommandRequestProto request, HddsProtos.PipelineID pipelineID,
       RaftClientRequest.Type type) {
-    return new RaftClientRequest(clientId, server.getId(),
-        RaftGroupId.valueOf(PipelineID.getFromProtobuf(pipelineID).getId()),
-        nextCallId(), ContainerCommandRequestMessage.toMessage(request, null),
-        type, null);
+    return RaftClientRequest.newBuilder()
+        .setClientId(clientId)
+        .setServerId(server.getId())
+        .setGroupId(
+            RaftGroupId.valueOf(
+                PipelineID.getFromProtobuf(pipelineID).getId()))
+        .setCallId(nextCallId())
+        .setMessage(ContainerCommandRequestMessage.toMessage(request, null))
+        .setType(type)
+        .build();
   }
 
   private GroupInfoRequest createGroupInfoRequest(
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
index a239b5f..53d6162 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
@@ -516,22 +516,38 @@
   @Override
   public void exportContainerData(OutputStream destination,
       ContainerPacker<KeyValueContainerData> packer) throws IOException {
-    // Closed/ Quasi closed containers are considered for replication by
-    // replication manager if they are under-replicated.
-    ContainerProtos.ContainerDataProto.State state =
-        getContainerData().getState();
-    if (!(state == ContainerProtos.ContainerDataProto.State.CLOSED ||
-        state == ContainerDataProto.State.QUASI_CLOSED)) {
-      throw new IllegalStateException(
-          "Only closed/quasi closed containers could be exported: " +
-              "Where as ContainerId="
-              + getContainerData().getContainerID() + " is in state " + state);
+    writeLock();
+    try {
+      // Closed/ Quasi closed containers are considered for replication by
+      // replication manager if they are under-replicated.
+      ContainerProtos.ContainerDataProto.State state =
+          getContainerData().getState();
+      if (!(state == ContainerProtos.ContainerDataProto.State.CLOSED ||
+          state == ContainerDataProto.State.QUASI_CLOSED)) {
+        throw new IllegalStateException(
+            "Only (quasi)closed containers can be exported, but " +
+                "ContainerId=" + getContainerData().getContainerID() +
+                " is in state " + state);
+      }
+
+      try {
+        compactDB();
+        // Close DB (and remove from cache) to avoid concurrent modification
+        // while packing it.
+        BlockUtils.removeDB(containerData, config);
+      } finally {
+        readLock();
+        writeUnlock();
+      }
+
+      packer.pack(this, destination);
+    } finally {
+      if (lock.isWriteLockedByCurrentThread()) {
+        writeUnlock();
+      } else {
+        readUnlock();
+      }
     }
-    compactDB();
-    // Close DB (and remove from cache) to avoid concurrent modification while
-    // packing it.
-    BlockUtils.removeDB(containerData, config);
-    packer.pack(this, destination);
   }
 
   /**
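
Note on the exportContainerData change above: the container is prepared under the write lock and the lock is then downgraded to a read lock for the long-running packing step, so concurrent readers are no longer blocked while the archive is written. A small sketch of the downgrade idiom with a plain ReentrantReadWriteLock; prepare/pack are placeholders for the compactDB/removeDB and packer.pack steps.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

class LockDowngradeSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  void export(Runnable prepare, Runnable pack) {
    lock.writeLock().lock();
    try {
      try {
        // Mutating steps (compaction, closing the cached DB handle) run
        // under the write lock.
        prepare.run();
      } finally {
        // Downgrade: grab the read lock before releasing the write lock so
        // no writer can slip in between preparation and packing.
        lock.readLock().lock();
        lock.writeLock().unlock();
      }
      // The long-running packing now only excludes writers, not readers.
      pack.run();
    } finally {
      if (lock.isWriteLockedByCurrentThread()) {
        lock.writeLock().unlock();
      } else {
        lock.readLock().unlock();
      }
    }
  }
}
```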
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
index 70f4ffc..dbc2a97 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
@@ -933,13 +933,8 @@
       final OutputStream outputStream,
       final TarContainerPacker packer)
       throws IOException{
-    container.readLock();
-    try {
-      final KeyValueContainer kvc = (KeyValueContainer) container;
-      kvc.exportContainerData(outputStream, packer);
-    } finally {
-      container.readUnlock();
-    }
+    final KeyValueContainer kvc = (KeyValueContainer) container;
+    kvc.exportContainerData(outputStream, packer);
   }
 
   @Override
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
index b03b7d7..3dab1fa 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
@@ -20,11 +20,12 @@
 
 import java.io.File;
 import java.io.IOException;
+import java.util.UUID;
 import java.util.LinkedList;
+import java.util.Objects;
+import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
-import java.util.Objects;
-import java.util.UUID;
 import java.util.concurrent.TimeUnit;
 import java.util.stream.Collectors;
 
@@ -32,14 +33,15 @@
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
-import org.apache.hadoop.hdds.utils.BackgroundService;
-import org.apache.hadoop.hdds.utils.BackgroundTask;
-import org.apache.hadoop.hdds.utils.BackgroundTaskQueue;
 import org.apache.hadoop.hdds.utils.BackgroundTaskResult;
 import org.apache.hadoop.hdds.utils.db.BatchOperation;
 import org.apache.hadoop.hdds.utils.MetadataKeyFilters;
+import org.apache.hadoop.hdds.utils.BackgroundTaskQueue;
+import org.apache.hadoop.hdds.utils.BackgroundService;
+import org.apache.hadoop.hdds.utils.BackgroundTask;
 import org.apache.hadoop.hdds.utils.MetadataKeyFilters.KeyPrefixFilter;
 import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.hdds.utils.db.TableIterator;
 import org.apache.hadoop.ozone.container.common.helpers.BlockData;
 import org.apache.hadoop.ozone.container.common.impl.ContainerData;
 import org.apache.hadoop.ozone.container.common.impl.TopNOrderedContainerDeletionChoosingPolicy;
@@ -50,14 +52,22 @@
 import org.apache.hadoop.ozone.container.common.utils.ReferenceCountedDB;
 import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
 import org.apache.hadoop.ozone.container.keyvalue.helpers.BlockUtils;
+import org.apache.hadoop.ozone.container.metadata.DatanodeStore;
+import org.apache.hadoop.ozone.container.metadata.DatanodeStoreSchemaTwoImpl;
 import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
 import org.apache.hadoop.util.Time;
+import org.apache.hadoop.hdds.protocol.proto
+    .StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
 
 import com.google.common.collect.Lists;
+
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL_DEFAULT;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER_DEFAULT;
+import static org.apache.hadoop.ozone.OzoneConsts.SCHEMA_V1;
+import static org.apache.hadoop.ozone.OzoneConsts.SCHEMA_V2;
+
 import org.apache.ratis.thirdparty.com.google.protobuf.InvalidProtocolBufferException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -66,7 +76,7 @@
  * A per-datanode container block deleting service takes in charge
  * of deleting staled ozone blocks.
  */
-// TODO: Fix BlockDeletingService to work with new StorageLayer
+
 public class BlockDeletingService extends BackgroundService {
 
   private static final Logger LOG =
@@ -244,21 +254,54 @@
 
     @Override
     public BackgroundTaskResult call() throws Exception {
-      ContainerBackgroundTaskResult crr = new ContainerBackgroundTaskResult();
+      ContainerBackgroundTaskResult crr;
       final Container container = ozoneContainer.getContainerSet()
           .getContainer(containerData.getContainerID());
       container.writeLock();
+      File dataDir = new File(containerData.getChunksPath());
       long startTime = Time.monotonicNow();
       // Scan container's db and get list of under deletion blocks
       try (ReferenceCountedDB meta = BlockUtils.getDB(containerData, conf)) {
+        if (containerData.getSchemaVersion().equals(SCHEMA_V1)) {
+          crr = deleteViaSchema1(meta, container, dataDir, startTime);
+        } else if (containerData.getSchemaVersion().equals(SCHEMA_V2)) {
+          crr = deleteViaSchema2(meta, container, dataDir, startTime);
+        } else {
+          throw new UnsupportedOperationException(
+              "Only schema version 1 and schema version 2 are supported.");
+        }
+        return crr;
+      } finally {
+        container.writeUnlock();
+      }
+    }
+
+    public boolean checkDataDir(File dataDir) {
+      boolean b = true;
+      if (!dataDir.exists() || !dataDir.isDirectory()) {
+        LOG.error("Invalid container data dir {} : "
+            + "does not exist or not a directory", dataDir.getAbsolutePath());
+        b = false;
+      }
+      return b;
+    }
+
+    public ContainerBackgroundTaskResult deleteViaSchema1(
+        ReferenceCountedDB meta, Container container, File dataDir,
+        long startTime) throws IOException {
+      ContainerBackgroundTaskResult crr = new ContainerBackgroundTaskResult();
+      if (!checkDataDir(dataDir)) {
+        return crr;
+      }
+      try {
         Table<String, BlockData> blockDataTable =
-                meta.getStore().getBlockDataTable();
+            meta.getStore().getBlockDataTable();
 
         // # of blocks to delete is throttled
         KeyPrefixFilter filter = MetadataKeyFilters.getDeletingKeyFilter();
         List<? extends Table.KeyValue<String, BlockData>> toDeleteBlocks =
             blockDataTable.getSequentialRangeKVs(null, blockLimitPerTask,
-                    filter);
+                filter);
         if (toDeleteBlocks.isEmpty()) {
           LOG.debug("No under deletion block found in container : {}",
               containerData.getContainerID());
@@ -267,12 +310,6 @@
         List<String> succeedBlocks = new LinkedList<>();
         LOG.debug("Container : {}, To-Delete blocks : {}",
             containerData.getContainerID(), toDeleteBlocks.size());
-        File dataDir = new File(containerData.getChunksPath());
-        if (!dataDir.exists() || !dataDir.isDirectory()) {
-          LOG.error("Invalid container data dir {} : "
-              + "does not exist or not a directory", dataDir.getAbsolutePath());
-          return crr;
-        }
 
         Handler handler = Objects.requireNonNull(ozoneContainer.getDispatcher()
             .getHandler(container.getContainerType()));
@@ -292,7 +329,7 @@
 
         // Once blocks are deleted... remove the blockID from blockDataTable.
         try(BatchOperation batch = meta.getStore().getBatchHandler()
-                .initBatchOperation()) {
+            .initBatchOperation()) {
           for (String entry : succeedBlocks) {
             blockDataTable.deleteWithBatch(batch, entry);
           }
@@ -312,8 +349,106 @@
         }
         crr.addAll(succeedBlocks);
         return crr;
-      } finally {
-        container.writeUnlock();
+      } catch (IOException exception) {
+        LOG.warn(
+            "Deletion operation was not successful for container: " + container
+                .getContainerData().getContainerID(), exception);
+        throw exception;
+      }
+    }
+
+    public ContainerBackgroundTaskResult deleteViaSchema2(
+        ReferenceCountedDB meta, Container container, File dataDir,
+        long startTime) throws IOException {
+      ContainerBackgroundTaskResult crr = new ContainerBackgroundTaskResult();
+      if (!checkDataDir(dataDir)) {
+        return crr;
+      }
+      try {
+        Table<String, BlockData> blockDataTable =
+            meta.getStore().getBlockDataTable();
+        DatanodeStore ds = meta.getStore();
+        DatanodeStoreSchemaTwoImpl dnStoreTwoImpl =
+            (DatanodeStoreSchemaTwoImpl) ds;
+        Table<Long, DeletedBlocksTransaction>
+            deleteTxns = dnStoreTwoImpl.getDeleteTransactionTable();
+        List<DeletedBlocksTransaction> delBlocks = new ArrayList<>();
+        int totalBlocks = 0;
+        try (TableIterator<Long,
+            ? extends Table.KeyValue<Long, DeletedBlocksTransaction>> iter =
+            dnStoreTwoImpl.getDeleteTransactionTable().iterator()) {
+          while (iter.hasNext() && (totalBlocks < blockLimitPerTask)) {
+            DeletedBlocksTransaction delTx = iter.next().getValue();
+            totalBlocks += delTx.getLocalIDList().size();
+            delBlocks.add(delTx);
+          }
+        }
+
+        if (delBlocks.isEmpty()) {
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("No transaction found in container : {}",
+                containerData.getContainerID());
+          }
+          return crr;
+        }
+
+        LOG.debug("Container : {}, To-Delete blocks : {}",
+            containerData.getContainerID(), delBlocks.size());
+
+        Handler handler = Objects.requireNonNull(ozoneContainer.getDispatcher()
+            .getHandler(container.getContainerType()));
+
+        deleteTransactions(delBlocks, handler, blockDataTable, container);
+
+        // Once blocks are deleted... remove the blockID from blockDataTable
+        // and also remove the transactions from txnTable.
+        try(BatchOperation batch = meta.getStore().getBatchHandler()
+            .initBatchOperation()) {
+          for (DeletedBlocksTransaction delTx : delBlocks) {
+            deleteTxns.deleteWithBatch(batch, delTx.getTxID());
+            for (Long blk : delTx.getLocalIDList()) {
+              String bID = blk.toString();
+              meta.getStore().getBlockDataTable().deleteWithBatch(batch, bID);
+            }
+          }
+          meta.getStore().getBatchHandler().commitBatchOperation(batch);
+          containerData.updateAndCommitDBCounters(meta, batch,
+              totalBlocks);
+          // update count of pending deletion blocks and block count in
+          // in-memory container status.
+          containerData.decrPendingDeletionBlocks(totalBlocks);
+          containerData.decrKeyCount(totalBlocks);
+        }
+
+        LOG.info("Container: {}, deleted blocks: {}, task elapsed time: {}ms",
+            containerData.getContainerID(), totalBlocks,
+            Time.monotonicNow() - startTime);
+
+        return crr;
+      } catch (IOException exception) {
+        LOG.warn(
+            "Deletion operation was not successful for container: " + container
+                .getContainerData().getContainerID(), exception);
+        throw exception;
+      }
+    }
+
+    private void deleteTransactions(List<DeletedBlocksTransaction> delBlocks,
+        Handler handler, Table<String, BlockData> blockDataTable,
+        Container container) throws IOException {
+      for (DeletedBlocksTransaction entry : delBlocks) {
+        for (Long blkLong : entry.getLocalIDList()) {
+          String blk = blkLong.toString();
+          BlockData blkInfo = blockDataTable.get(blk);
+          LOG.debug("Deleting block {}", blk);
+          try {
+            handler.deleteBlock(container, blkInfo);
+          } catch (InvalidProtocolBufferException e) {
+            LOG.error("Failed to parse block info for block {}", blk, e);
+          } catch (IOException e) {
+            LOG.error("Failed to delete files for block {}", blk, e);
+          }
+        }
       }
     }
 
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/AbstractDatanodeDBDefinition.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/AbstractDatanodeDBDefinition.java
index 8895475..2fb1174 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/AbstractDatanodeDBDefinition.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/AbstractDatanodeDBDefinition.java
@@ -60,7 +60,7 @@
   @Override
   public DBColumnFamilyDefinition[] getColumnFamilies() {
     return new DBColumnFamilyDefinition[] {getBlockDataColumnFamily(),
-            getMetadataColumnFamily(), getDeletedBlocksColumnFamily()};
+        getMetadataColumnFamily(), getDeletedBlocksColumnFamily()};
   }
 
   public abstract DBColumnFamilyDefinition<String, BlockData>
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeSchemaOneDBDefinition.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeSchemaOneDBDefinition.java
index faf399d..7d5e053 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeSchemaOneDBDefinition.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeSchemaOneDBDefinition.java
@@ -88,4 +88,9 @@
       getDeletedBlocksColumnFamily() {
     return DELETED_BLOCKS;
   }
+
+  public DBColumnFamilyDefinition[] getColumnFamilies() {
+    return new DBColumnFamilyDefinition[] {getBlockDataColumnFamily(),
+        getMetadataColumnFamily(), getDeletedBlocksColumnFamily() };
+  }
 }
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeSchemaTwoDBDefinition.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeSchemaTwoDBDefinition.java
index 2ac56f2..1fabd13 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeSchemaTwoDBDefinition.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeSchemaTwoDBDefinition.java
@@ -17,16 +17,19 @@
  */
 package org.apache.hadoop.ozone.container.metadata;
 
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos;
 import org.apache.hadoop.hdds.utils.db.DBColumnFamilyDefinition;
 import org.apache.hadoop.hdds.utils.db.LongCodec;
 import org.apache.hadoop.hdds.utils.db.StringCodec;
 import org.apache.hadoop.ozone.container.common.helpers.BlockData;
 import org.apache.hadoop.ozone.container.common.helpers.ChunkInfoList;
+import org.apache.hadoop.hdds.protocol.proto
+    .StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
 
 /**
  * This class defines the RocksDB structure for datanodes following schema
- * version 2, where the block data, metadata, and deleted block ids are put in
- * their own separate column families.
+ * version 2, where the block data, metadata, and delete transactions are
+ * put in their own separate column families.
  */
 public class DatanodeSchemaTwoDBDefinition extends
         AbstractDatanodeDBDefinition {
@@ -34,7 +37,7 @@
   public static final DBColumnFamilyDefinition<String, BlockData>
           BLOCK_DATA =
           new DBColumnFamilyDefinition<>(
-                  "block_data",
+                  "blockData",
                   String.class,
                   new StringCodec(),
                   BlockData.class,
@@ -52,17 +55,33 @@
   public static final DBColumnFamilyDefinition<String, ChunkInfoList>
           DELETED_BLOCKS =
           new DBColumnFamilyDefinition<>(
-                  "deleted_blocks",
+                  "deletedBlocks",
                   String.class,
                   new StringCodec(),
                   ChunkInfoList.class,
                   new ChunkInfoListCodec());
 
+  public static final DBColumnFamilyDefinition<Long, DeletedBlocksTransaction>
+      DELETE_TRANSACTION =
+      new DBColumnFamilyDefinition<>(
+          "deleteTxns",
+          Long.class,
+          new LongCodec(),
+          StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction.class,
+          new DeletedBlocksTransactionCodec());
+
   protected DatanodeSchemaTwoDBDefinition(String dbPath) {
     super(dbPath);
   }
 
   @Override
+  public DBColumnFamilyDefinition[] getColumnFamilies() {
+    return new DBColumnFamilyDefinition[] {getBlockDataColumnFamily(),
+        getMetadataColumnFamily(), getDeletedBlocksColumnFamily(),
+        getDeleteTransactionsColumnFamily()};
+  }
+
+  @Override
   public DBColumnFamilyDefinition<String, BlockData>
       getBlockDataColumnFamily() {
     return BLOCK_DATA;
@@ -78,4 +97,9 @@
       getDeletedBlocksColumnFamily() {
     return DELETED_BLOCKS;
   }
+
+  public DBColumnFamilyDefinition<Long, DeletedBlocksTransaction>
+      getDeleteTransactionsColumnFamily() {
+    return DELETE_TRANSACTION;
+  }
 }
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeStoreSchemaTwoImpl.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeStoreSchemaTwoImpl.java
index df9b8c0..db8fe6b 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeStoreSchemaTwoImpl.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DatanodeStoreSchemaTwoImpl.java
@@ -18,6 +18,9 @@
 package org.apache.hadoop.ozone.container.metadata;
 
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import org.apache.hadoop.hdds.protocol.proto.
+    StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
+import org.apache.hadoop.hdds.utils.db.Table;
 
 import java.io.IOException;
 
@@ -26,10 +29,13 @@
  * three column families/tables:
  * 1. A block data table.
  * 2. A metadata table.
- * 3. A deleted blocks table.
+ * 3. A delete transaction table.
  */
 public class DatanodeStoreSchemaTwoImpl extends AbstractDatanodeStore {
 
+  private final Table<Long, DeletedBlocksTransaction>
+      deleteTransactionTable;
+
   /**
    * Constructs the datanode store and starts the DB Services.
    *
@@ -41,5 +47,11 @@
       throws IOException {
     super(config, containerID, new DatanodeSchemaTwoDBDefinition(dbPath),
         openReadOnly);
+    this.deleteTransactionTable = new DatanodeSchemaTwoDBDefinition(dbPath)
+        .getDeleteTransactionsColumnFamily().getTable(getStore());
+  }
+
+  public Table<Long, DeletedBlocksTransaction> getDeleteTransactionTable() {
+    return deleteTransactionTable;
   }
 }
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DeletedBlocksTransactionCodec.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DeletedBlocksTransactionCodec.java
new file mode 100644
index 0000000..90c26fe
--- /dev/null
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/DeletedBlocksTransactionCodec.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.container.metadata;
+
+import org.apache.hadoop.hdds.utils.db.Codec;
+import org.apache.hadoop.hdds.protocol.proto
+    .StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
+
+import java.io.IOException;
+
+/**
+ * Supports encoding and decoding {@link DeletedBlocksTransaction} objects.
+ */
+public class DeletedBlocksTransactionCodec
+    implements Codec<DeletedBlocksTransaction> {
+
+  @Override public byte[] toPersistedFormat(
+      DeletedBlocksTransaction deletedBlocksTransaction) {
+    return deletedBlocksTransaction.toByteArray();
+  }
+
+  @Override public DeletedBlocksTransaction fromPersistedFormat(byte[] rawData)
+      throws IOException {
+    return DeletedBlocksTransaction.parseFrom(rawData);
+  }
+
+  @Override public DeletedBlocksTransaction copyObject(
+      DeletedBlocksTransaction deletedBlocksTransaction) {
+    throw new UnsupportedOperationException();
+  }
+}
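
Note on the new codec above: it is a straight protobuf round trip (toByteArray on write, parseFrom on read) with copyObject left unsupported. The dependency-free sketch below shows the same encode/decode contract with a String codec standing in for the protobuf message, just to make the round trip explicit; the class and method names are invented.

```java
import java.nio.charset.StandardCharsets;
import java.util.function.Function;

class CodecSketch<T> {
  private final Function<T, byte[]> encoder;
  private final Function<byte[], T> decoder;

  CodecSketch(Function<T, byte[]> encoder, Function<byte[], T> decoder) {
    this.encoder = encoder;
    this.decoder = decoder;
  }

  byte[] toPersistedFormat(T value) {
    return encoder.apply(value);
  }

  T fromPersistedFormat(byte[] raw) {
    return decoder.apply(raw);
  }

  public static void main(String[] args) {
    // For protobuf messages the encoder/decoder would be
    // toByteArray()/parseFrom(); a String codec keeps this dependency-free.
    CodecSketch<String> codec = new CodecSketch<>(
        s -> s.getBytes(StandardCharsets.UTF_8),
        b -> new String(b, StandardCharsets.UTF_8));
    byte[] raw = codec.toPersistedFormat("txn-42");
    assert "txn-42".equals(codec.fromPersistedFormat(raw));
  }
}
```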
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
index a44ef38..3ecddac 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
@@ -29,6 +29,7 @@
 
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails.Port.Name;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerType;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto;
@@ -52,8 +53,8 @@
 import org.apache.hadoop.ozone.container.common.volume.HddsVolume;
 import org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet;
 import org.apache.hadoop.ozone.container.keyvalue.statemachine.background.BlockDeletingService;
-import org.apache.hadoop.ozone.container.replication.GrpcReplicationService;
-import org.apache.hadoop.ozone.container.replication.OnDemandContainerReplicationSource;
+import org.apache.hadoop.ozone.container.replication.ReplicationServer;
+import org.apache.hadoop.ozone.container.replication.ReplicationServer.ReplicationConfig;
 import org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -85,19 +86,25 @@
   private List<ContainerDataScanner> dataScanners;
   private final BlockDeletingService blockDeletingService;
   private final GrpcTlsConfig tlsClientConfig;
+  private final ReplicationServer replicationServer;
+  private DatanodeDetails datanodeDetails;
 
   /**
    * Construct OzoneContainer object.
+   *
    * @param datanodeDetails
    * @param conf
    * @param certClient
    * @throws DiskOutOfSpaceException
    * @throws IOException
    */
-  public OzoneContainer(DatanodeDetails datanodeDetails, ConfigurationSource
-      conf, StateContext context, CertificateClient certClient)
+  public OzoneContainer(
+      DatanodeDetails datanodeDetails, ConfigurationSource
+      conf, StateContext context, CertificateClient certClient
+  )
       throws IOException {
     config = conf;
+    this.datanodeDetails = datanodeDetails;
     volumeSet = new MutableVolumeSet(datanodeDetails.getUuidString(), conf);
     volumeSet.setFailedVolumeListener(this::handleVolumeFailures);
     containerSet = new ContainerSet();
@@ -135,14 +142,22 @@
      * XceiverServerGrpc is the read channel
      */
     controller = new ContainerController(containerSet, handlers);
+
     writeChannel = XceiverServerRatis.newXceiverServerRatis(
         datanodeDetails, config, hddsDispatcher, controller, certClient,
         context);
+
+    replicationServer = new ReplicationServer(
+        controller,
+        conf.getObject(ReplicationConfig.class),
+        secConf,
+        certClient);
+
     readChannel = new XceiverServerGrpc(
-        datanodeDetails, config, hddsDispatcher, certClient,
-        createReplicationService());
+        datanodeDetails, config, hddsDispatcher, certClient);
     Duration svcInterval = conf.getObject(
             DatanodeConfiguration.class).getBlockDeletionInterval();
+
     long serviceTimeout = config
         .getTimeDuration(OZONE_BLOCK_DELETING_SERVICE_TIMEOUT,
             OZONE_BLOCK_DELETING_SERVICE_TIMEOUT_DEFAULT,
@@ -158,10 +173,7 @@
     return tlsClientConfig;
   }
 
-  private GrpcReplicationService createReplicationService() {
-    return new GrpcReplicationService(
-        new OnDemandContainerReplicationSource(controller));
-  }
+
 
   /**
    * Build's container map.
@@ -169,7 +181,7 @@
   private void buildContainerSet() {
     Iterator<HddsVolume> volumeSetIterator = volumeSet.getVolumesList()
         .iterator();
-    ArrayList<Thread> volumeThreads = new ArrayList<Thread>();
+    ArrayList<Thread> volumeThreads = new ArrayList<>();
     long startTime = System.currentTimeMillis();
 
     //TODO: diskchecker should be run before this, to see how disks are.
@@ -242,6 +254,10 @@
   public void start(String scmId) throws IOException {
     LOG.info("Attempting to start container services.");
     startContainerScrub();
+
+    replicationServer.start();
+    datanodeDetails.setPort(Name.REPLICATION, replicationServer.getPort());
+
     writeChannel.start();
     readChannel.start();
     hddsDispatcher.init();
@@ -256,6 +272,7 @@
     //TODO: at end of container IO integration work.
     LOG.info("Attempting to stop container services.");
     stopContainerScrub();
+    replicationServer.stop();
     writeChannel.stop();
     readChannel.stop();
     this.handlers.values().forEach(Handler::stop);
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/GrpcReplicationClient.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/GrpcReplicationClient.java
index 275321d..53dac9d 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/GrpcReplicationClient.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/GrpcReplicationClient.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.ozone.container.replication;
 
+import java.io.File;
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.OutputStream;
@@ -40,6 +41,7 @@
 import org.apache.ratis.thirdparty.io.grpc.netty.GrpcSslContexts;
 import org.apache.ratis.thirdparty.io.grpc.netty.NettyChannelBuilder;
 import org.apache.ratis.thirdparty.io.grpc.stub.StreamObserver;
+import org.apache.ratis.thirdparty.io.netty.handler.ssl.ClientAuth;
 import org.apache.ratis.thirdparty.io.netty.handler.ssl.SslContextBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -58,20 +60,27 @@
 
   private final Path workingDirectory;
 
-  public GrpcReplicationClient(String host, int port, Path workingDir,
-      SecurityConfig secConfig, X509Certificate caCert) throws IOException {
+  public GrpcReplicationClient(
+      String host, int port, Path workingDir,
+      SecurityConfig secConfig, X509Certificate caCert
+  ) throws IOException {
     NettyChannelBuilder channelBuilder =
         NettyChannelBuilder.forAddress(host, port)
             .usePlaintext()
             .maxInboundMessageSize(OzoneConsts.OZONE_SCM_CHUNK_MAX_SIZE);
 
-    if (secConfig.isGrpcTlsEnabled()) {
+    if (secConfig.isSecurityEnabled()) {
       channelBuilder.useTransportSecurity();
 
       SslContextBuilder sslContextBuilder = GrpcSslContexts.forClient();
       if (caCert != null) {
         sslContextBuilder.trustManager(caCert);
       }
+
+      sslContextBuilder.clientAuth(ClientAuth.REQUIRE);
+      sslContextBuilder.keyManager(
+          new File(secConfig.getCertificateFileName()),
+          new File(secConfig.getPrivateKeyFileName()));
       if (secConfig.useTestCert()) {
         channelBuilder.overrideAuthority("localhost");
       }
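
With the change above, the replication client requires mutual TLS whenever security is enabled: it trusts the cluster CA and presents its own certificate and key. A minimal sketch of the same channel setup using the shaded gRPC and Netty classes already imported in this file; the host, port, and file locations are placeholder assumptions, not values from this patch.

import javax.net.ssl.SSLException;
import java.io.File;

import org.apache.ratis.thirdparty.io.grpc.ManagedChannel;
import org.apache.ratis.thirdparty.io.grpc.netty.GrpcSslContexts;
import org.apache.ratis.thirdparty.io.grpc.netty.NettyChannelBuilder;
import org.apache.ratis.thirdparty.io.netty.handler.ssl.ClientAuth;
import org.apache.ratis.thirdparty.io.netty.handler.ssl.SslContextBuilder;

public final class MutualTlsChannelSketch {
  private MutualTlsChannelSketch() { }

  public static ManagedChannel build(String host, int port,
      File caCert, File clientCert, File clientKey) throws SSLException {
    // Trust the cluster CA and present our own certificate, mirroring
    // the mutual-TLS setup added to GrpcReplicationClient above.
    SslContextBuilder ssl = GrpcSslContexts.forClient()
        .trustManager(caCert)
        .keyManager(clientCert, clientKey)
        .clientAuth(ClientAuth.REQUIRE);

    return NettyChannelBuilder.forAddress(host, port)
        .useTransportSecurity()
        .sslContext(ssl.build())
        .build();
  }
}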
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/ReplicationServer.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/ReplicationServer.java
new file mode 100644
index 0000000..e8f831b
--- /dev/null
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/ReplicationServer.java
@@ -0,0 +1,144 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.container.replication;
+
+import javax.net.ssl.SSLException;
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.hdds.conf.Config;
+import org.apache.hadoop.hdds.conf.ConfigGroup;
+import org.apache.hadoop.hdds.conf.ConfigTag;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.hdds.security.x509.certificate.client.CertificateClient;
+import org.apache.hadoop.hdds.tracing.GrpcServerInterceptor;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.container.ozoneimpl.ContainerController;
+
+import org.apache.ratis.thirdparty.io.grpc.Server;
+import org.apache.ratis.thirdparty.io.grpc.ServerInterceptors;
+import org.apache.ratis.thirdparty.io.grpc.netty.GrpcSslContexts;
+import org.apache.ratis.thirdparty.io.grpc.netty.NettyServerBuilder;
+import org.apache.ratis.thirdparty.io.netty.handler.ssl.ClientAuth;
+import org.apache.ratis.thirdparty.io.netty.handler.ssl.SslContextBuilder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Separate network server for server2server container replication.
+ */
+public class ReplicationServer {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ReplicationServer.class);
+
+  private Server server;
+
+  private SecurityConfig secConf;
+
+  private CertificateClient caClient;
+
+  private ContainerController controller;
+
+  private int port;
+
+  public ReplicationServer(
+      ContainerController controller,
+      ReplicationConfig replicationConfig,
+      SecurityConfig secConf,
+      CertificateClient caClient
+  ) {
+    this.secConf = secConf;
+    this.caClient = caClient;
+    this.controller = controller;
+    this.port = replicationConfig.getPort();
+    init();
+  }
+
+  public void init() {
+    NettyServerBuilder nettyServerBuilder = NettyServerBuilder.forPort(port)
+        .maxInboundMessageSize(OzoneConsts.OZONE_SCM_CHUNK_MAX_SIZE)
+        .addService(ServerInterceptors.intercept(new GrpcReplicationService(
+            new OnDemandContainerReplicationSource(controller)
+        ), new GrpcServerInterceptor()));
+
+    if (secConf.isSecurityEnabled()) {
+      try {
+        SslContextBuilder sslContextBuilder = SslContextBuilder.forServer(
+            caClient.getPrivateKey(), caClient.getCertificate());
+
+        sslContextBuilder = GrpcSslContexts.configure(
+            sslContextBuilder, secConf.getGrpcSslProvider());
+
+        sslContextBuilder.clientAuth(ClientAuth.REQUIRE);
+        sslContextBuilder.trustManager(caClient.getCACertificate());
+
+        nettyServerBuilder.sslContext(sslContextBuilder.build());
+      } catch (SSLException ex) {
+        throw new IllegalArgumentException(
+            "Unable to setup TLS for secure datanode replication GRPC "
+                + "endpoint.", ex);
+      }
+    }
+
+    server = nettyServerBuilder.build();
+  }
+
+  public void start() throws IOException {
+    server.start();
+
+    if (port == 0) {
+      LOG.info("{} is started using port {}", getClass().getSimpleName(),
+          server.getPort());
+    }
+
+    port = server.getPort();
+
+  }
+
+  public void stop() {
+    try {
+      server.shutdown().awaitTermination(10L, TimeUnit.SECONDS);
+    } catch (InterruptedException ex) {
+      LOG.warn("{} couldn't be stopped gracefully", getClass().getSimpleName());
+    }
+  }
+
+  public int getPort() {
+    return port;
+  }
+
+  @ConfigGroup(prefix = "hdds.datanode.replication")
+  public static final class ReplicationConfig {
+
+    @Config(key = "port", defaultValue = "9886", description = "Port used for"
+        + " the server2server replication server", tags = {
+        ConfigTag.MANAGEMENT})
+    private int port;
+
+    public int getPort() {
+      return port;
+    }
+
+    public ReplicationConfig setPort(int portParam) {
+      this.port = portParam;
+      return this;
+    }
+  }
+
+}
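
The nested ReplicationConfig group above makes the replication port configurable (default 9886, with 0 meaning an ephemeral port chosen at start). A small sketch of reading the typed config object; the composed key name hdds.datanode.replication.port is an assumption derived from the @ConfigGroup prefix and @Config key, and OzoneConfiguration is assumed to expose ConfigurationSource#getObject as it does elsewhere in the codebase.

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.container.replication.ReplicationServer.ReplicationConfig;

public final class ReplicationPortSketch {
  private ReplicationPortSketch() { }

  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Assumed key name: @ConfigGroup prefix + "." + @Config key.
    conf.set("hdds.datanode.replication.port", "0"); // 0 lets the server pick a free port
    ReplicationConfig replicationConf = conf.getObject(ReplicationConfig.class);
    System.out.println("Configured replication port: " + replicationConf.getPort());
  }
}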
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/ReplicationSupervisor.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/ReplicationSupervisor.java
index cb281f0..6becf62 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/ReplicationSupervisor.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/ReplicationSupervisor.java
@@ -25,11 +25,11 @@
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicLong;
 
-import com.google.common.util.concurrent.ThreadFactoryBuilder;
 import org.apache.hadoop.ozone.container.common.impl.ContainerSet;
 import org.apache.hadoop.ozone.container.replication.ReplicationTask.Status;
 
 import com.google.common.annotations.VisibleForTesting;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -44,6 +44,7 @@
   private final ContainerSet containerSet;
   private final ContainerReplicator replicator;
   private final ExecutorService executor;
+
   private final AtomicLong requestCounter = new AtomicLong();
   private final AtomicLong successCounter = new AtomicLong();
   private final AtomicLong failureCounter = new AtomicLong();
@@ -58,7 +59,8 @@
   @VisibleForTesting
   ReplicationSupervisor(
       ContainerSet containerSet, ContainerReplicator replicator,
-      ExecutorService executor) {
+      ExecutorService executor
+  ) {
     this.containerSet = containerSet;
     this.replicator = replicator;
     this.containersInFlight = ConcurrentHashMap.newKeySet();
@@ -67,9 +69,10 @@
 
   public ReplicationSupervisor(
       ContainerSet containerSet,
-      ContainerReplicator replicator, int poolSize) {
+      ContainerReplicator replicator, int poolSize
+  ) {
     this(containerSet, replicator, new ThreadPoolExecutor(
-        0, poolSize, 60, TimeUnit.SECONDS,
+        poolSize, poolSize, 60, TimeUnit.SECONDS,
         new LinkedBlockingQueue<>(),
         new ThreadFactoryBuilder().setDaemon(true)
             .setNameFormat("ContainerReplicationThread-%d")
@@ -85,6 +88,12 @@
     }
   }
 
+  @VisibleForTesting
+  public void shutdownAfterFinish() throws InterruptedException {
+    executor.shutdown();
+    executor.awaitTermination(1L, TimeUnit.DAYS);
+  }
+
   public void stop() {
     try {
       executor.shutdown();
@@ -100,6 +109,7 @@
   /**
    * Get the number of containers currently being downloaded
    * or scheduled for download.
+   *
    * @return Count of in-flight replications.
    */
   @VisibleForTesting
@@ -107,10 +117,10 @@
     return containersInFlight.size();
   }
 
-  private final class TaskRunner implements Runnable {
+  public final class TaskRunner implements Runnable {
     private final ReplicationTask task;
 
-    private TaskRunner(ReplicationTask task) {
+    public TaskRunner(ReplicationTask task) {
       this.task = task;
     }
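
The executor change above (core pool size raised from 0 to poolSize) matters because a ThreadPoolExecutor fed by an unbounded LinkedBlockingQueue only grows past its core size when the queue rejects a task, which never happens; with a core of 0 it would effectively replicate on a single thread. A standalone, JDK-only sketch of that behavior:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class CorePoolSizeSketch {
  private CorePoolSizeSketch() { }

  public static void main(String[] args) throws InterruptedException {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        4, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    for (int i = 0; i < 16; i++) {
      pool.submit(() -> {
        try {
          Thread.sleep(500);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      });
    }
    Thread.sleep(200);
    // With core == max == 4 this prints 4; with core == 0 it would be 1,
    // because extra workers are only created when the queue is full.
    System.out.println("Active threads: " + pool.getPoolSize());
    pool.shutdown();
  }
}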
 
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/SimpleContainerDownloader.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/SimpleContainerDownloader.java
index 5d8a86b..0967503 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/SimpleContainerDownloader.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/SimpleContainerDownloader.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.ozone.container.replication;
 
+import java.io.IOException;
 import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.security.cert.X509Certificate;
@@ -87,6 +88,7 @@
         if (result == null) {
           result = downloadContainer(containerId, datanode);
         } else {
+
           result = result.exceptionally(t -> {
             LOG.error("Error on replicating container: " + containerId, t);
             try {
@@ -128,11 +130,11 @@
   protected CompletableFuture<Path> downloadContainer(
       long containerId,
       DatanodeDetails datanode
-  ) throws Exception {
+  ) throws IOException {
     CompletableFuture<Path> result;
     GrpcReplicationClient grpcReplicationClient =
         new GrpcReplicationClient(datanode.getIpAddress(),
-            datanode.getPort(Name.STANDALONE).getValue(),
+            datanode.getPort(Name.REPLICATION).getValue(),
             workingDirectory, securityConfig, caCert);
     result = grpcReplicationClient.download(containerId)
         .thenApply(r -> {
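
The downloader now contacts sources on the dedicated REPLICATION port and chains futures so that a failure on one datanode falls back to the next. A simplified, self-contained sketch of that CompletableFuture fallback idiom; it is not the exact logic of SimpleContainerDownloader, and the datanode names are made up.

import java.util.Arrays;
import java.util.concurrent.CompletableFuture;

public final class FallbackDownloadSketch {
  private FallbackDownloadSketch() { }

  // Pretend download: only the last source succeeds.
  static CompletableFuture<String> download(String source) {
    return CompletableFuture.supplyAsync(() -> {
      if (!"dn3".equals(source)) {
        throw new IllegalStateException("download failed from " + source);
      }
      return "container downloaded from " + source;
    });
  }

  public static void main(String[] args) {
    CompletableFuture<String> result = null;
    for (String source : Arrays.asList("dn1", "dn2", "dn3")) {
      if (result == null) {
        result = download(source);
      } else {
        // Swallow the previous failure and try the next source.
        result = result.exceptionally(t -> null)
            .thenCompose(r -> r != null
                ? CompletableFuture.completedFuture(r)
                : download(source));
      }
    }
    System.out.println(result.join());  // container downloaded from dn3
  }
}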
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SetNodeOperationalStateCommand.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SetNodeOperationalStateCommand.java
new file mode 100644
index 0000000..3ff7949
--- /dev/null
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SetNodeOperationalStateCommand.java
@@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.protocol.commands;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.protocol.proto
+    .StorageContainerDatanodeProtocolProtos.SCMCommandProto;
+import org.apache.hadoop.hdds.protocol.proto
+    .StorageContainerDatanodeProtocolProtos.SetNodeOperationalStateCommandProto;
+
+/**
+ * A command used to persist the current node operational state on the datanode.
+ */
+public class SetNodeOperationalStateCommand
+    extends SCMCommand<SetNodeOperationalStateCommandProto> {
+
+  private final HddsProtos.NodeOperationalState opState;
+  private long stateExpiryEpochSeconds;
+
+  /**
+   * Ctor that creates a SetNodeOperationalStateCommand.
+   *
+   * @param id    - Command ID. Something like a timestamp would suffice.
+   * @param state - The OperationalState the node should be set to.
+   * @param stateExpiryEpochSeconds The epoch time when the state should
+   *                                expire, or zero for the state to remain
+   *                                indefinitely.
+   */
+  public SetNodeOperationalStateCommand(long id,
+      HddsProtos.NodeOperationalState state, long stateExpiryEpochSeconds) {
+    super(id);
+    this.opState = state;
+    this.stateExpiryEpochSeconds = stateExpiryEpochSeconds;
+  }
+
+  /**
+   * Returns the type of this command.
+   *
+   * @return Type  - This is setNodeOperationalStateCommand.
+   */
+  @Override
+  public SCMCommandProto.Type getType() {
+    return SCMCommandProto.Type.setNodeOperationalStateCommand;
+  }
+
+  /**
+   * Gets the protobuf message of this object.
+   *
+   * @return A protobuf message.
+   */
+  @Override
+  public SetNodeOperationalStateCommandProto getProto() {
+    return SetNodeOperationalStateCommandProto.newBuilder()
+        .setCmdId(getId())
+        .setNodeOperationalState(opState)
+        .setStateExpiryEpochSeconds(stateExpiryEpochSeconds).build();
+  }
+
+  public HddsProtos.NodeOperationalState getOpState() {
+    return opState;
+  }
+
+  public long getStateExpiryEpochSeconds() {
+    return stateExpiryEpochSeconds;
+  }
+
+  public static SetNodeOperationalStateCommand getFromProtobuf(
+      SetNodeOperationalStateCommandProto cmdProto) {
+    Preconditions.checkNotNull(cmdProto);
+    return new SetNodeOperationalStateCommand(cmdProto.getCmdId(),
+        cmdProto.getNodeOperationalState(),
+        cmdProto.getStateExpiryEpochSeconds());
+  }
+}
\ No newline at end of file
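
A short round-trip sketch for the command class added above, using only the constructor, getProto and getFromProtobuf shown in this file; the IN_SERVICE constant is assumed to exist on HddsProtos.NodeOperationalState and is not defined by this patch.

import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.SetNodeOperationalStateCommandProto;
import org.apache.hadoop.ozone.protocol.commands.SetNodeOperationalStateCommand;

public final class OperationalStateCommandSketch {
  private OperationalStateCommandSketch() { }

  public static void main(String[] args) {
    SetNodeOperationalStateCommand cmd = new SetNodeOperationalStateCommand(
        System.currentTimeMillis(),                  // command id (e.g. a timestamp)
        HddsProtos.NodeOperationalState.IN_SERVICE,  // assumed enum constant
        0);                                          // 0 = state never expires
    SetNodeOperationalStateCommandProto proto = cmd.getProto();

    SetNodeOperationalStateCommand decoded =
        SetNodeOperationalStateCommand.getFromProtobuf(proto);
    System.out.println(decoded.getOpState() + ", expires at epoch "
        + decoded.getStateExpiryEpochSeconds());
  }
}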
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
index 24c598a..40cfbba 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
@@ -44,13 +44,11 @@
 import org.apache.hadoop.ozone.common.OzoneChecksumException;
 import org.apache.hadoop.ozone.container.common.helpers.BlockData;
 import org.apache.hadoop.ozone.container.common.helpers.ChunkInfo;
-import org.apache.hadoop.ozone.container.common.transport.server.XceiverServerSpi;
 import org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis;
 import org.apache.hadoop.security.token.Token;
 
 import com.google.common.base.Preconditions;
 import com.google.common.base.Strings;
-import org.apache.ratis.protocol.RaftGroupId;
 import org.apache.ratis.server.RaftServer;
 import org.apache.ratis.statemachine.StateMachine;
 import org.junit.Assert;
@@ -589,14 +587,12 @@
           " not exist in datanode:" + dn.getDatanodeDetails().getUuid());
     }
 
-    XceiverServerSpi serverSpi = dn.getDatanodeStateMachine().
-        getContainer().getWriteChannel();
-    RaftServer server = (((XceiverServerRatis) serverSpi).getServer());
-    RaftGroupId groupId =
-        pipeline == null ? server.getGroupIds().iterator().next() :
-            RatisHelper.newRaftGroup(pipeline).getGroupId();
-
-    return server.getDivision(groupId);
+    XceiverServerRatis server =
+        (XceiverServerRatis) (dn.getDatanodeStateMachine().
+        getContainer().getWriteChannel());
+    return pipeline == null ? server.getServerDivision() :
+        server.getServerDivision(
+            RatisHelper.newRaftGroup(pipeline).getGroupId());
   }
 
   public static StateMachine getStateMachine(HddsDatanodeService dn,
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
index 2eb6a39..96d4228 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
@@ -21,9 +21,10 @@
 import java.io.File;
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.util.UUID;
+import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
-import java.util.UUID;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
 
@@ -34,9 +35,13 @@
 import org.apache.hadoop.hdds.conf.MutableConfigurationSource;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.utils.BackgroundService;
 import org.apache.hadoop.hdds.utils.MetadataKeyFilters;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.hdds.utils.db.TableIterator;
 import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.common.Checksum;
 import org.apache.hadoop.ozone.common.ChunkBuffer;
@@ -55,7 +60,6 @@
 import org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet;
 import org.apache.hadoop.ozone.container.common.volume.RoundRobinVolumeChoosingPolicy;
 import org.apache.hadoop.ozone.container.common.volume.VolumeSet;
-import org.apache.hadoop.ozone.container.keyvalue.ChunkLayoutTestInfo;
 import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer;
 import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
 import org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler;
@@ -64,11 +68,14 @@
 import org.apache.hadoop.ozone.container.keyvalue.impl.FilePerChunkStrategy;
 import org.apache.hadoop.ozone.container.keyvalue.interfaces.ChunkManager;
 import org.apache.hadoop.ozone.container.keyvalue.statemachine.background.BlockDeletingService;
+import org.apache.hadoop.ozone.container.metadata.DatanodeStore;
+import org.apache.hadoop.ozone.container.metadata.DatanodeStoreSchemaTwoImpl;
 import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
 import org.apache.hadoop.ozone.container.testutils.BlockDeletingServiceTestImpl;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.GenericTestUtils.LogCapturer;
 
+import static java.util.stream.Collectors.toList;
 import static org.apache.commons.lang3.RandomStringUtils.randomAlphanumeric;
 
 import org.junit.AfterClass;
@@ -82,7 +89,11 @@
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER_DEFAULT;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_SERVICE_INTERVAL;
+import static org.apache.hadoop.ozone.OzoneConsts.SCHEMA_VERSIONS;
+import static org.apache.hadoop.ozone.OzoneConsts.SCHEMA_V1;
+import static org.apache.hadoop.ozone.OzoneConsts.SCHEMA_V2;
 import static org.apache.hadoop.ozone.container.common.impl.ChunkLayOutVersion.FILE_PER_BLOCK;
+import static org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.LOG;
 import static org.mockito.ArgumentMatchers.any;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
@@ -101,16 +112,38 @@
   private static MutableConfigurationSource conf;
 
   private final ChunkLayOutVersion layout;
+  private final String schemaVersion;
   private int blockLimitPerTask;
   private static VolumeSet volumeSet;
 
-  public TestBlockDeletingService(ChunkLayOutVersion layout) {
-    this.layout = layout;
+  public TestBlockDeletingService(LayoutInfo layoutInfo) {
+    this.layout = layoutInfo.layout;
+    this.schemaVersion = layoutInfo.schemaVersion;
   }
 
   @Parameterized.Parameters
   public static Iterable<Object[]> parameters() {
-    return ChunkLayoutTestInfo.chunkLayoutParameters();
+    return LayoutInfo.layoutList.stream().map(each -> new Object[] {each})
+        .collect(toList());
+  }
+
+  public static class LayoutInfo {
+    private final String schemaVersion;
+    private final ChunkLayOutVersion layout;
+
+    public LayoutInfo(String schemaVersion, ChunkLayOutVersion layout) {
+      this.schemaVersion = schemaVersion;
+      this.layout = layout;
+    }
+
+    private static List<LayoutInfo> layoutList = new ArrayList<>();
+    static {
+      for (ChunkLayOutVersion ch : ChunkLayOutVersion.getAllVersions()) {
+        for (String sch : SCHEMA_VERSIONS) {
+          layoutList.add(new LayoutInfo(sch, ch));
+        }
+      }
+    }
   }
 
   @BeforeClass
@@ -158,6 +191,7 @@
     }
     byte[] arr = randomAlphanumeric(1048576).getBytes(UTF_8);
     ChunkBuffer buffer = ChunkBuffer.wrap(ByteBuffer.wrap(arr));
+    int txnID = 0;
     for (int x = 0; x < numOfContainers; x++) {
       long containerID = ContainerTestHelper.getTestContainerID();
       KeyValueContainerData data =
@@ -165,58 +199,167 @@
               ContainerTestHelper.CONTAINER_MAX_SIZE,
               UUID.randomUUID().toString(), datanodeUuid);
       data.closeContainer();
+      data.setSchemaVersion(schemaVersion);
       KeyValueContainer container = new KeyValueContainer(data, conf);
       container.create(volumeSet,
           new RoundRobinVolumeChoosingPolicy(), scmId);
       containerSet.addContainer(container);
       data = (KeyValueContainerData) containerSet.getContainer(
           containerID).getContainerData();
-      long chunkLength = 100;
-      try(ReferenceCountedDB metadata = BlockUtils.getDB(data, conf)) {
-        for (int j = 0; j < numOfBlocksPerContainer; j++) {
-          BlockID blockID =
-              ContainerTestHelper.getTestBlockID(containerID);
-          String deleteStateName = OzoneConsts.DELETING_KEY_PREFIX +
-              blockID.getLocalID();
-          BlockData kd = new BlockData(blockID);
-          List<ContainerProtos.ChunkInfo> chunks = Lists.newArrayList();
-          for (int k = 0; k < numOfChunksPerBlock; k++) {
-            final String chunkName = String.format("block.%d.chunk.%d", j, k);
-            final long offset = k * chunkLength;
-            ContainerProtos.ChunkInfo info =
-                ContainerProtos.ChunkInfo.newBuilder()
-                    .setChunkName(chunkName)
-                    .setLen(chunkLength)
-                    .setOffset(offset)
-                    .setChecksumData(Checksum.getNoChecksumDataProto())
-                    .build();
-            chunks.add(info);
-            ChunkInfo chunkInfo = new ChunkInfo(chunkName, offset, chunkLength);
-            ChunkBuffer chunkData = buffer.duplicate(0, (int) chunkLength);
-            chunkManager.writeChunk(container, blockID, chunkInfo, chunkData,
-                WRITE_STAGE);
-            chunkManager.writeChunk(container, blockID, chunkInfo, chunkData,
-                COMMIT_STAGE);
-          }
-          kd.setChunks(chunks);
-          metadata.getStore().getBlockDataTable().put(
-                  deleteStateName, kd);
-          container.getContainerData().incrPendingDeletionBlocks(1);
-        }
-        container.getContainerData().setKeyCount(numOfBlocksPerContainer);
-        // Set block count, bytes used and pending delete block count.
-        metadata.getStore().getMetadataTable().put(
-                OzoneConsts.BLOCK_COUNT, (long)numOfBlocksPerContainer);
-        metadata.getStore().getMetadataTable().put(
-                OzoneConsts.CONTAINER_BYTES_USED,
-            chunkLength * numOfChunksPerBlock * numOfBlocksPerContainer);
-        metadata.getStore().getMetadataTable().put(
-                OzoneConsts.PENDING_DELETE_BLOCK_COUNT,
-                (long)numOfBlocksPerContainer);
+      if (data.getSchemaVersion().equals(SCHEMA_V1)) {
+        createPendingDeleteBlocksSchema1(numOfBlocksPerContainer, data,
+            containerID, numOfChunksPerBlock, buffer, chunkManager, container);
+      } else if (data.getSchemaVersion().equals(SCHEMA_V2)) {
+        createPendingDeleteBlocksSchema2(numOfBlocksPerContainer, txnID,
+            containerID, numOfChunksPerBlock, buffer, chunkManager, container,
+            data);
+      } else {
+        throw new UnsupportedOperationException(
+            "Only schema version 1 and schema version 2 are "
+                + "supported.");
       }
     }
   }
 
+  @SuppressWarnings("checkstyle:parameternumber")
+  private void createPendingDeleteBlocksSchema1(int numOfBlocksPerContainer,
+      KeyValueContainerData data, long containerID, int numOfChunksPerBlock,
+      ChunkBuffer buffer, ChunkManager chunkManager,
+      KeyValueContainer container) {
+    BlockID blockID = null;
+    try (ReferenceCountedDB metadata = BlockUtils.getDB(data, conf)) {
+      for (int j = 0; j < numOfBlocksPerContainer; j++) {
+        blockID = ContainerTestHelper.getTestBlockID(containerID);
+        String deleteStateName =
+            OzoneConsts.DELETING_KEY_PREFIX + blockID.getLocalID();
+        BlockData kd = new BlockData(blockID);
+        List<ContainerProtos.ChunkInfo> chunks = Lists.newArrayList();
+        putChunksInBlock(numOfChunksPerBlock, j, chunks, buffer, chunkManager,
+            container, blockID);
+        kd.setChunks(chunks);
+        metadata.getStore().getBlockDataTable().put(deleteStateName, kd);
+        container.getContainerData().incrPendingDeletionBlocks(1);
+      }
+      updateMetaData(data, container, numOfBlocksPerContainer,
+          numOfChunksPerBlock);
+    } catch (IOException exception) {
+      LOG.info("Exception " + exception);
+      LOG.warn("Failed to put block: " + blockID + " in BlockDataTable.");
+    }
+  }
+
+  @SuppressWarnings("checkstyle:parameternumber")
+  private void createPendingDeleteBlocksSchema2(int numOfBlocksPerContainer,
+      int txnID, long containerID, int numOfChunksPerBlock, ChunkBuffer buffer,
+      ChunkManager chunkManager, KeyValueContainer container,
+      KeyValueContainerData data) {
+    List<Long> containerBlocks = new ArrayList<>();
+    int blockCount = 0;
+    for (int i = 0; i < numOfBlocksPerContainer; i++) {
+      txnID = txnID + 1;
+      BlockID blockID = ContainerTestHelper.getTestBlockID(containerID);
+      BlockData kd = new BlockData(blockID);
+      List<ContainerProtos.ChunkInfo> chunks = Lists.newArrayList();
+      putChunksInBlock(numOfChunksPerBlock, i, chunks, buffer, chunkManager,
+          container, blockID);
+      kd.setChunks(chunks);
+      String bID = null;
+      try (ReferenceCountedDB metadata = BlockUtils.getDB(data, conf)) {
+        bID = blockID.getLocalID() + "";
+        metadata.getStore().getBlockDataTable().put(bID, kd);
+      } catch (IOException exception) {
+        LOG.info("Exception = " + exception);
+        LOG.warn("Failed to put block: " + bID + " in BlockDataTable.");
+      }
+      container.getContainerData().incrPendingDeletionBlocks(1);
+
+      // If a single container holds more blocks than 'blockLimitPerTask',
+      // split them into (totalBlocksInContainer / blockLimitPerTask)
+      // transactions of blockLimitPerTask blocks each, plus a final
+      // transaction holding the remaining
+      // (totalBlocksInContainer % blockLimitPerTask) blocks.
+      containerBlocks.add(blockID.getLocalID());
+      blockCount++;
+      if (blockCount == blockLimitPerTask || i == (numOfBlocksPerContainer
+          - 1)) {
+        createTxn(data, containerBlocks, txnID, containerID);
+        containerBlocks.clear();
+        blockCount = 0;
+      }
+    }
+    updateMetaData(data, container, numOfBlocksPerContainer,
+        numOfChunksPerBlock);
+  }
+
+  private void createTxn(KeyValueContainerData data, List<Long> containerBlocks,
+      int txnID, long containerID) {
+    try (ReferenceCountedDB metadata = BlockUtils.getDB(data, conf)) {
+      StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction dtx =
+          StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction
+              .newBuilder().setTxID(txnID).setContainerID(containerID)
+              .addAllLocalID(containerBlocks).setCount(0).build();
+      try (BatchOperation batch = metadata.getStore().getBatchHandler()
+          .initBatchOperation()) {
+        DatanodeStore ds = metadata.getStore();
+        DatanodeStoreSchemaTwoImpl dnStoreTwoImpl =
+            (DatanodeStoreSchemaTwoImpl) ds;
+        dnStoreTwoImpl.getDeleteTransactionTable()
+            .putWithBatch(batch, (long) txnID, dtx);
+        metadata.getStore().getBatchHandler().commitBatchOperation(batch);
+      }
+    } catch (IOException exception) {
+      LOG.warn("Transaction creation was not successful for txnID: " + txnID
+          + " consisting of " + containerBlocks.size() + " blocks.");
+    }
+  }
+
+  private void putChunksInBlock(int numOfChunksPerBlock, int i,
+      List<ContainerProtos.ChunkInfo> chunks, ChunkBuffer buffer,
+      ChunkManager chunkManager, KeyValueContainer container, BlockID blockID) {
+    long chunkLength = 100;
+    try {
+      for (int k = 0; k < numOfChunksPerBlock; k++) {
+        final String chunkName = String.format("block.%d.chunk.%d", i, k);
+        final long offset = k * chunkLength;
+        ContainerProtos.ChunkInfo info =
+            ContainerProtos.ChunkInfo.newBuilder().setChunkName(chunkName)
+                .setLen(chunkLength).setOffset(offset)
+                .setChecksumData(Checksum.getNoChecksumDataProto()).build();
+        chunks.add(info);
+        ChunkInfo chunkInfo = new ChunkInfo(chunkName, offset, chunkLength);
+        ChunkBuffer chunkData = buffer.duplicate(0, (int) chunkLength);
+        chunkManager
+            .writeChunk(container, blockID, chunkInfo, chunkData, WRITE_STAGE);
+        chunkManager
+            .writeChunk(container, blockID, chunkInfo, chunkData, COMMIT_STAGE);
+      }
+    } catch (IOException ex) {
+      LOG.warn("Putting chunks in blocks was not successful for BlockID: "
+          + blockID);
+    }
+  }
+
+  private void updateMetaData(KeyValueContainerData data,
+      KeyValueContainer container, int numOfBlocksPerContainer,
+      int numOfChunksPerBlock) {
+    long chunkLength = 100;
+    try (ReferenceCountedDB metadata = BlockUtils.getDB(data, conf)) {
+      container.getContainerData().setKeyCount(numOfBlocksPerContainer);
+      // Set block count, bytes used and pending delete block count.
+      metadata.getStore().getMetadataTable()
+          .put(OzoneConsts.BLOCK_COUNT, (long) numOfBlocksPerContainer);
+      metadata.getStore().getMetadataTable()
+          .put(OzoneConsts.CONTAINER_BYTES_USED,
+              chunkLength * numOfChunksPerBlock * numOfBlocksPerContainer);
+      metadata.getStore().getMetadataTable()
+          .put(OzoneConsts.PENDING_DELETE_BLOCK_COUNT,
+              (long) numOfBlocksPerContainer);
+    } catch (IOException exception) {
+      LOG.warn("Meta Data update was not successful for container: "+container);
+    }
+  }
+
   /**
    *  Run service runDeletingTasks and wait until it has been processed.
    */
@@ -231,11 +374,32 @@
    * Get under deletion blocks count from DB,
    * note this info is parsed from container.db.
    */
-  private int getUnderDeletionBlocksCount(ReferenceCountedDB meta)
-      throws IOException {
-    return meta.getStore().getBlockDataTable()
-        .getRangeKVs(null, 100,
-        MetadataKeyFilters.getDeletingKeyFilter()).size();
+  private int getUnderDeletionBlocksCount(ReferenceCountedDB meta,
+      KeyValueContainerData data) throws IOException {
+    if (data.getSchemaVersion().equals(SCHEMA_V1)) {
+      return meta.getStore().getBlockDataTable()
+          .getRangeKVs(null, 100, MetadataKeyFilters.getDeletingKeyFilter())
+          .size();
+    } else if (data.getSchemaVersion().equals(SCHEMA_V2)) {
+      int pendingBlocks = 0;
+      DatanodeStore ds = meta.getStore();
+      DatanodeStoreSchemaTwoImpl dnStoreTwoImpl =
+          (DatanodeStoreSchemaTwoImpl) ds;
+      try (
+          TableIterator<Long, ? extends Table.KeyValue<Long, 
+              StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction>> 
+              iter = dnStoreTwoImpl.getDeleteTransactionTable().iterator()) {
+        while (iter.hasNext()) {
+          StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction
+              delTx = iter.next().getValue();
+          pendingBlocks += delTx.getLocalIDList().size();
+        }
+      }
+      return pendingBlocks;
+    } else {
+      throw new UnsupportedOperationException(
+          "Only schema version 1 and schema version 2 are supported.");
+    }
   }
 
 
@@ -261,6 +425,7 @@
     // Ensure 1 container was created
     List<ContainerData> containerData = Lists.newArrayList();
     containerSet.listContainer(0L, 1, containerData);
+    KeyValueContainerData data = (KeyValueContainerData) containerData.get(0);
     Assert.assertEquals(1, containerData.size());
 
     try(ReferenceCountedDB meta = BlockUtils.getDB(
@@ -280,7 +445,7 @@
       Assert.assertEquals(0, transactionId);
 
       // Ensure there are 3 blocks under deletion and 0 deleted blocks
-      Assert.assertEquals(3, getUnderDeletionBlocksCount(meta));
+      Assert.assertEquals(3, getUnderDeletionBlocksCount(meta, data));
       Assert.assertEquals(3, meta.getStore().getMetadataTable()
           .get(OzoneConsts.PENDING_DELETE_BLOCK_COUNT).longValue());
 
@@ -348,6 +513,9 @@
   public void testBlockDeletionTimeout() throws Exception {
     conf.setInt(OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL, 10);
     conf.setInt(OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER, 2);
+    this.blockLimitPerTask =
+        conf.getInt(OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER,
+            OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER_DEFAULT);
     ContainerSet containerSet = new ContainerSet();
     createToDeleteBlocks(containerSet, 1, 3, 1);
     ContainerMetrics metrics = ContainerMetrics.create(conf);
@@ -394,7 +562,7 @@
       LogCapturer newLog = LogCapturer.captureLogs(BackgroundService.LOG);
       GenericTestUtils.waitFor(() -> {
         try {
-          return getUnderDeletionBlocksCount(meta) == 0;
+          return getUnderDeletionBlocksCount(meta, data) == 0;
         } catch (IOException ignored) {
         }
         return false;
@@ -445,6 +613,9 @@
         TopNOrderedContainerDeletionChoosingPolicy.class.getName());
     conf.setInt(OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL, 1);
     conf.setInt(OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER, 1);
+    this.blockLimitPerTask =
+        conf.getInt(OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER,
+            OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER_DEFAULT);
     ContainerSet containerSet = new ContainerSet();
 
     int containerCount = 2;
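
For schema V2, the test groups pending-delete blocks into transactions of at most blockLimitPerTask blocks, with the remainder in the final transaction. A plain arithmetic sketch of how many transactions that yields; the numbers are illustrative only.

public final class DeleteTxnBatchingSketch {
  private DeleteTxnBatchingSketch() { }

  public static void main(String[] args) {
    int totalBlocksInContainer = 10;
    int blockLimitPerTask = 3;

    int fullTxns = totalBlocksInContainer / blockLimitPerTask;   // 3
    int remainder = totalBlocksInContainer % blockLimitPerTask;  // 1
    int txnCount = fullTxns + (remainder > 0 ? 1 : 0);           // 4

    System.out.printf("%d blocks with a limit of %d -> %d transactions"
            + " (last one holds %d block(s))%n",
        totalBlocksInContainer, blockLimitPerTask, txnCount,
        remainder > 0 ? remainder : blockLimitPerTask);
  }
}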
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/TestStateContext.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/TestStateContext.java
index d3032c3..e9c39d3 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/TestStateContext.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/TestStateContext.java
@@ -23,11 +23,19 @@
 import static org.apache.hadoop.test.GenericTestUtils.waitFor;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
 
 import java.net.InetSocketAddress;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
 import java.util.List;
+import java.util.Map;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.Executors;
 import java.util.concurrent.ExecutorService;
@@ -37,6 +45,7 @@
 import java.util.concurrent.atomic.AtomicInteger;
 
 import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import com.google.protobuf.Descriptors.Descriptor;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerAction;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.PipelineAction;
@@ -53,6 +62,271 @@
  */
 public class TestStateContext {
 
+  /**
+   * Only accepted types of reports can be put back to the report queue.
+   */
+  @Test
+  public void testPutBackReports() {
+    OzoneConfiguration conf = new OzoneConfiguration();
+    DatanodeStateMachine datanodeStateMachineMock =
+        mock(DatanodeStateMachine.class);
+
+    StateContext ctx = new StateContext(conf, DatanodeStates.getInitState(),
+        datanodeStateMachineMock);
+    InetSocketAddress scm1 = new InetSocketAddress("scm1", 9001);
+    ctx.addEndpoint(scm1);
+    InetSocketAddress scm2 = new InetSocketAddress("scm2", 9001);
+    ctx.addEndpoint(scm2);
+
+    Map<String, Integer> expectedReportCount = new HashMap<>();
+
+    // Case 1: Put back an incremental report
+
+    ctx.putBackReports(Collections.singletonList(newMockReport(
+        StateContext.COMMAND_STATUS_REPORTS_PROTO_NAME)), scm1);
+    // scm2 report queue should be empty
+    checkReportCount(ctx.getAllAvailableReports(scm2), expectedReportCount);
+    // Check scm1 queue
+    expectedReportCount.put(
+        StateContext.COMMAND_STATUS_REPORTS_PROTO_NAME, 1);
+    checkReportCount(ctx.getAllAvailableReports(scm1), expectedReportCount);
+    // getReports dequeues incremental reports
+    expectedReportCount.clear();
+
+    ctx.putBackReports(Collections.singletonList(newMockReport(
+        StateContext.INCREMENTAL_CONTAINER_REPORT_PROTO_NAME)), scm2);
+    // scm1 report queue should be empty
+    checkReportCount(ctx.getAllAvailableReports(scm1), expectedReportCount);
+    // Check scm2 queue
+    expectedReportCount.put(
+        StateContext.INCREMENTAL_CONTAINER_REPORT_PROTO_NAME, 1);
+    checkReportCount(ctx.getAllAvailableReports(scm2), expectedReportCount);
+    // getReports dequeues incremental reports
+    expectedReportCount.clear();
+
+    // Case 2: Attempt to put back a full report
+
+    try {
+      ctx.putBackReports(Collections.singletonList(
+          newMockReport(StateContext.CONTAINER_REPORTS_PROTO_NAME)), scm1);
+      fail("Should throw exception when putting back unaccepted reports!");
+    } catch (IllegalArgumentException ignored) {
+    }
+    try {
+      ctx.putBackReports(Collections.singletonList(
+          newMockReport(StateContext.NODE_REPORT_PROTO_NAME)), scm2);
+      fail("Should throw exception when putting back unaccepted reports!");
+    } catch (IllegalArgumentException ignored) {
+    }
+    try {
+      ctx.putBackReports(Collections.singletonList(
+          newMockReport(StateContext.PIPELINE_REPORTS_PROTO_NAME)), scm1);
+      fail("Should throw exception when putting back unaccepted reports!");
+    } catch (IllegalArgumentException ignored) {
+    }
+
+    // Case 3: Put back mixed types of incremental reports
+
+    ctx.putBackReports(Arrays.asList(
+        newMockReport(StateContext.COMMAND_STATUS_REPORTS_PROTO_NAME),
+        newMockReport(StateContext.INCREMENTAL_CONTAINER_REPORT_PROTO_NAME),
+        newMockReport(StateContext.INCREMENTAL_CONTAINER_REPORT_PROTO_NAME),
+        newMockReport(StateContext.INCREMENTAL_CONTAINER_REPORT_PROTO_NAME),
+        newMockReport(StateContext.COMMAND_STATUS_REPORTS_PROTO_NAME)
+    ), scm1);
+    // scm2 report queue should be empty
+    checkReportCount(ctx.getAllAvailableReports(scm2), expectedReportCount);
+    // Check scm1 queue
+    expectedReportCount.put(
+        StateContext.COMMAND_STATUS_REPORTS_PROTO_NAME, 2);
+    expectedReportCount.put(
+        StateContext.INCREMENTAL_CONTAINER_REPORT_PROTO_NAME, 3);
+    checkReportCount(ctx.getAllAvailableReports(scm1), expectedReportCount);
+    // getReports dequeues incremental reports
+    expectedReportCount.clear();
+
+    // Case 4: Attempt to put back mixed types of full reports
+
+    try {
+      ctx.putBackReports(Arrays.asList(
+          newMockReport(StateContext.CONTAINER_REPORTS_PROTO_NAME),
+          newMockReport(StateContext.NODE_REPORT_PROTO_NAME),
+          newMockReport(StateContext.PIPELINE_REPORTS_PROTO_NAME)
+      ), scm1);
+      fail("Should throw exception when putting back unaccepted reports!");
+    } catch (IllegalArgumentException ignored) {
+    }
+
+    // Case 5: Attempt to put back mixed full and incremental reports
+
+    try {
+      ctx.putBackReports(Arrays.asList(
+          newMockReport(StateContext.CONTAINER_REPORTS_PROTO_NAME),
+          newMockReport(StateContext.COMMAND_STATUS_REPORTS_PROTO_NAME),
+          newMockReport(StateContext.INCREMENTAL_CONTAINER_REPORT_PROTO_NAME)
+      ), scm2);
+      fail("Should throw exception when putting back unaccepted reports!");
+    } catch (IllegalArgumentException ignored) {
+    }
+  }
+
+  @Test
+  public void testReportQueueWithAddReports() {
+    OzoneConfiguration conf = new OzoneConfiguration();
+    DatanodeStateMachine datanodeStateMachineMock =
+        mock(DatanodeStateMachine.class);
+
+    StateContext ctx = new StateContext(conf, DatanodeStates.getInitState(),
+        datanodeStateMachineMock);
+    InetSocketAddress scm1 = new InetSocketAddress("scm1", 9001);
+    ctx.addEndpoint(scm1);
+    InetSocketAddress scm2 = new InetSocketAddress("scm2", 9001);
+    ctx.addEndpoint(scm2);
+    // Check initial state
+    assertEquals(0, ctx.getAllAvailableReports(scm1).size());
+    assertEquals(0, ctx.getAllAvailableReports(scm2).size());
+
+    Map<String, Integer> expectedReportCount = new HashMap<>();
+
+    // Add a bunch of ContainerReports
+    batchAddReports(ctx, StateContext.CONTAINER_REPORTS_PROTO_NAME, 128);
+    // Should only keep the latest one
+    expectedReportCount.put(StateContext.CONTAINER_REPORTS_PROTO_NAME, 1);
+    checkReportCount(ctx.getAllAvailableReports(scm1), expectedReportCount);
+    checkReportCount(ctx.getAllAvailableReports(scm2), expectedReportCount);
+
+    // Add a bunch of NodeReport
+    batchAddReports(ctx, StateContext.NODE_REPORT_PROTO_NAME, 128);
+    // Should only keep the latest one
+    expectedReportCount.put(StateContext.NODE_REPORT_PROTO_NAME, 1);
+    checkReportCount(ctx.getAllAvailableReports(scm1), expectedReportCount);
+    checkReportCount(ctx.getAllAvailableReports(scm2), expectedReportCount);
+
+    // Add a bunch of PipelineReports
+    batchAddReports(ctx, StateContext.PIPELINE_REPORTS_PROTO_NAME, 128);
+    // Should only keep the latest one
+    expectedReportCount.put(StateContext.PIPELINE_REPORTS_PROTO_NAME, 1);
+    checkReportCount(ctx.getAllAvailableReports(scm1), expectedReportCount);
+    checkReportCount(ctx.getAllAvailableReports(scm2), expectedReportCount);
+
+    // Add a bunch of PipelineReports
+    batchAddReports(ctx, StateContext.PIPELINE_REPORTS_PROTO_NAME, 128);
+    // Should only keep the latest one
+    expectedReportCount.put(StateContext.PIPELINE_REPORTS_PROTO_NAME, 1);
+    checkReportCount(ctx.getAllAvailableReports(scm1), expectedReportCount);
+    checkReportCount(ctx.getAllAvailableReports(scm2), expectedReportCount);
+
+    // Add a bunch of CommandStatusReports
+    batchAddReports(ctx,
+        StateContext.COMMAND_STATUS_REPORTS_PROTO_NAME, 128);
+    expectedReportCount.put(
+        StateContext.COMMAND_STATUS_REPORTS_PROTO_NAME, 128);
+    // Should keep all of them
+    checkReportCount(ctx.getAllAvailableReports(scm1), expectedReportCount);
+    checkReportCount(ctx.getAllAvailableReports(scm2), expectedReportCount);
+    // getReports dequeues incremental reports
+    expectedReportCount.remove(
+        StateContext.COMMAND_STATUS_REPORTS_PROTO_NAME);
+
+    // Add a bunch of IncrementalContainerReport
+    batchAddReports(ctx,
+        StateContext.INCREMENTAL_CONTAINER_REPORT_PROTO_NAME, 128);
+    // Should keep all of them
+    expectedReportCount.put(
+        StateContext.INCREMENTAL_CONTAINER_REPORT_PROTO_NAME, 128);
+    checkReportCount(ctx.getAllAvailableReports(scm1), expectedReportCount);
+    checkReportCount(ctx.getAllAvailableReports(scm2), expectedReportCount);
+    // getReports dequeues incremental reports
+    expectedReportCount.remove(
+        StateContext.INCREMENTAL_CONTAINER_REPORT_PROTO_NAME);
+  }
+
+  void batchAddReports(StateContext ctx, String reportName, int count) {
+    for (int i = 0; i < count; i++) {
+      ctx.addReport(newMockReport(reportName));
+    }
+  }
+
+  void checkReportCount(List<GeneratedMessage> reports,
+      Map<String, Integer> expectedReportCount) {
+    Map<String, Integer> reportCount = new HashMap<>();
+    for (GeneratedMessage report : reports) {
+      final String reportName = report.getDescriptorForType().getFullName();
+      reportCount.put(reportName, reportCount.getOrDefault(reportName, 0) + 1);
+    }
+    // Verify
+    assertEquals(expectedReportCount, reportCount);
+  }
+
+  /**
+   * Check if Container, Node and Pipeline report APIs work as expected.
+   */
+  @Test
+  public void testContainerNodePipelineReportAPIs() {
+    OzoneConfiguration conf = new OzoneConfiguration();
+    DatanodeStateMachine datanodeStateMachineMock =
+        mock(DatanodeStateMachine.class);
+
+    // ContainerReports
+    StateContext context1 = newStateContext(conf, datanodeStateMachineMock);
+    assertNull(context1.getContainerReports());
+    assertNull(context1.getNodeReport());
+    assertNull(context1.getPipelineReports());
+    GeneratedMessage containerReports =
+        newMockReport(StateContext.CONTAINER_REPORTS_PROTO_NAME);
+    context1.addReport(containerReports);
+
+    assertNotNull(context1.getContainerReports());
+    assertEquals(StateContext.CONTAINER_REPORTS_PROTO_NAME,
+        context1.getContainerReports().getDescriptorForType().getFullName());
+    assertNull(context1.getNodeReport());
+    assertNull(context1.getPipelineReports());
+
+    // NodeReport
+    StateContext context2 = newStateContext(conf, datanodeStateMachineMock);
+    GeneratedMessage nodeReport =
+        newMockReport(StateContext.NODE_REPORT_PROTO_NAME);
+    context2.addReport(nodeReport);
+
+    assertNull(context2.getContainerReports());
+    assertNotNull(context2.getNodeReport());
+    assertEquals(StateContext.NODE_REPORT_PROTO_NAME,
+        context2.getNodeReport().getDescriptorForType().getFullName());
+    assertNull(context2.getPipelineReports());
+
+    // PipelineReports
+    StateContext context3 = newStateContext(conf, datanodeStateMachineMock);
+    GeneratedMessage pipelineReports =
+        newMockReport(StateContext.PIPELINE_REPORTS_PROTO_NAME);
+    context3.addReport(pipelineReports);
+
+    assertNull(context3.getContainerReports());
+    assertNull(context3.getNodeReport());
+    assertNotNull(context3.getPipelineReports());
+    assertEquals(StateContext.PIPELINE_REPORTS_PROTO_NAME,
+        context3.getPipelineReports().getDescriptorForType().getFullName());
+  }
+
+  private StateContext newStateContext(OzoneConfiguration conf,
+      DatanodeStateMachine datanodeStateMachineMock) {
+    StateContext stateContext = new StateContext(conf,
+        DatanodeStates.getInitState(), datanodeStateMachineMock);
+    InetSocketAddress scm1 = new InetSocketAddress("scm1", 9001);
+    stateContext.addEndpoint(scm1);
+    InetSocketAddress scm2 = new InetSocketAddress("scm2", 9001);
+    stateContext.addEndpoint(scm2);
+    return stateContext;
+  }
+
+  private GeneratedMessage newMockReport(String messageType) {
+    GeneratedMessage pipelineReports = mock(GeneratedMessage.class);
+    when(pipelineReports.getDescriptorForType()).thenReturn(
+        mock(Descriptor.class));
+    when(pipelineReports.getDescriptorForType().getFullName()).thenReturn(
+        messageType);
+    return pipelineReports;
+  }
+
   @Test
   public void testReportAPIs() {
     OzoneConfiguration conf = new OzoneConfiguration();
@@ -64,8 +338,14 @@
     InetSocketAddress scm1 = new InetSocketAddress("scm1", 9001);
     InetSocketAddress scm2 = new InetSocketAddress("scm2", 9001);
 
-    // Try to add report with endpoint. Should not be stored.
-    stateContext.addReport(mock(GeneratedMessage.class));
+    GeneratedMessage generatedMessage = mock(GeneratedMessage.class);
+    when(generatedMessage.getDescriptorForType()).thenReturn(
+        mock(Descriptor.class));
+    when(generatedMessage.getDescriptorForType().getFullName()).thenReturn(
+        "hadoop.hdds.CommandStatusReportsProto");
+
+    // Try to add a report with zero endpoints. Should not be stored.
+    stateContext.addReport(generatedMessage);
     assertTrue(stateContext.getAllAvailableReports(scm1).isEmpty());
 
     // Add 2 scm endpoints.
@@ -73,7 +353,7 @@
     stateContext.addEndpoint(scm2);
 
     // Add report. Should be added to all endpoints.
-    stateContext.addReport(mock(GeneratedMessage.class));
+    stateContext.addReport(generatedMessage);
     List<GeneratedMessage> allAvailableReports =
         stateContext.getAllAvailableReports(scm1);
     assertEquals(1, allAvailableReports.size());
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCreatePipelineCommandHandler.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCreatePipelineCommandHandler.java
index febd1c3..d23f1c4 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCreatePipelineCommandHandler.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCreatePipelineCommandHandler.java
@@ -44,6 +44,7 @@
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.mockito.Mockito;
+import org.mockito.stubbing.Answer;
 import org.powermock.api.mockito.PowerMockito;
 import org.powermock.core.classloader.annotations.PrepareForTest;
 import org.powermock.modules.junit4.PowerMockRunner;
@@ -79,7 +80,10 @@
     Mockito.when(raftClient.getGroupManagementApi(
         Mockito.any(RaftPeerId.class))).thenReturn(raftClientGroupManager);
     PowerMockito.mockStatic(RaftClient.class);
-    PowerMockito.when(RaftClient.newBuilder()).thenReturn(builder);
+    // Workaround for a PowerMock bug:
+    // https://github.com/powermock/powermock/issues/992
+    PowerMockito.when(RaftClient.newBuilder()).thenAnswer(
+        (Answer<RaftClient.Builder>) invocation -> builder);
   }
 
   private RaftClient.Builder mockRaftClientBuilder() {
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java
index 25d8b1d..4000e34 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java
@@ -29,7 +29,6 @@
 import org.apache.hadoop.hdds.utils.db.Table;
 import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.container.common.helpers.BlockData;
-import org.apache.hadoop.ozone.container.common.helpers.ChunkInfo;
 import org.apache.hadoop.ozone.container.common.impl.ChunkLayOutVersion;
 import org.apache.hadoop.ozone.container.common.impl.ContainerDataYaml;
 import org.apache.hadoop.ozone.container.common.volume.HddsVolume;
@@ -59,15 +58,19 @@
 import java.io.FileInputStream;
 import java.io.FileOutputStream;
 import java.io.IOException;
-import java.util.ArrayList;
+import java.io.OutputStream;
 import java.util.HashMap;
 import java.util.Map;
 import java.util.List;
 import java.util.UUID;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
 
 import static org.apache.ratis.util.Preconditions.assertTrue;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.fail;
 import static org.mockito.ArgumentMatchers.anyList;
 import static org.mockito.ArgumentMatchers.anyLong;
@@ -125,36 +128,9 @@
     keyValueContainer = new KeyValueContainer(keyValueContainerData, CONF);
   }
 
-  private void addBlocks(int count) throws Exception {
-    long containerId = keyValueContainerData.getContainerID();
-
-    try(ReferenceCountedDB metadataStore = BlockUtils.getDB(keyValueContainer
-        .getContainerData(), CONF)) {
-      for (int i = 0; i < count; i++) {
-        // Creating BlockData
-        BlockID blockID = new BlockID(containerId, i);
-        BlockData blockData = new BlockData(blockID);
-        blockData.addMetadata(OzoneConsts.VOLUME, OzoneConsts.OZONE);
-        blockData.addMetadata(OzoneConsts.OWNER,
-            OzoneConsts.OZONE_SIMPLE_HDFS_USER);
-        List<ContainerProtos.ChunkInfo> chunkList = new ArrayList<>();
-        ChunkInfo info = new ChunkInfo(String.format("%d.data.%d", blockID
-            .getLocalID(), 0), 0, 1024);
-        chunkList.add(info.getProtoBufMessage());
-        blockData.setChunks(chunkList);
-        metadataStore.getStore().getBlockDataTable()
-                .put(Long.toString(blockID.getLocalID()), blockData);
-      }
-    }
-  }
-
   @Test
   public void testCreateContainer() throws Exception {
-
-    // Create Container.
-    keyValueContainer.create(volumeSet, volumeChoosingPolicy, scmId);
-
-    keyValueContainerData = keyValueContainer.getContainerData();
+    createContainer();
 
     String containerMetaDataPath = keyValueContainerData.getMetadataPath();
     String chunksPath = keyValueContainerData.getChunksPath();
@@ -171,38 +147,11 @@
 
   @Test
   public void testContainerImportExport() throws Exception {
-
     long containerId = keyValueContainer.getContainerData().getContainerID();
-    // Create Container.
-    keyValueContainer.create(volumeSet, volumeChoosingPolicy, scmId);
-
-
-    keyValueContainerData = keyValueContainer
-        .getContainerData();
-
-    keyValueContainerData.setState(
-        ContainerProtos.ContainerDataProto.State.CLOSED);
-
+    createContainer();
     long numberOfKeysToWrite = 12;
-    //write one few keys to check the key count after import
-    try(ReferenceCountedDB metadataStore =
-        BlockUtils.getDB(keyValueContainerData, CONF)) {
-      Table<String, BlockData> blockDataTable =
-              metadataStore.getStore().getBlockDataTable();
-
-      for (long i = 0; i < numberOfKeysToWrite; i++) {
-        blockDataTable.put("test" + i, new BlockData(new BlockID(i, i)));
-      }
-
-      // As now when we put blocks, we increment block count and update in DB.
-      // As for test, we are doing manually so adding key count to DB.
-      metadataStore.getStore().getMetadataTable()
-              .put(OzoneConsts.BLOCK_COUNT, numberOfKeysToWrite);
-    }
-
-    Map<String, String> metadata = new HashMap<>();
-    metadata.put("key1", "value1");
-    keyValueContainer.update(metadata, true);
+    closeContainer();
+    populate(numberOfKeysToWrite);
 
     //destination path
     File folderToExport = folder.newFile("exported.tar.gz");
@@ -261,6 +210,76 @@
 
   }
 
+  /**
+   * Create the container on disk.
+   */
+  private void createContainer() throws StorageContainerException {
+    keyValueContainer.create(volumeSet, volumeChoosingPolicy, scmId);
+    keyValueContainerData = keyValueContainer.getContainerData();
+  }
+
+  /**
+   * Add some keys to the container.
+   */
+  private void populate(long numberOfKeysToWrite) throws IOException {
+    try (ReferenceCountedDB metadataStore =
+        BlockUtils.getDB(keyValueContainer.getContainerData(), CONF)) {
+      Table<String, BlockData> blockDataTable =
+              metadataStore.getStore().getBlockDataTable();
+
+      for (long i = 0; i < numberOfKeysToWrite; i++) {
+        blockDataTable.put("test" + i, new BlockData(new BlockID(i, i)));
+      }
+
+      // Normally the block count is incremented in the DB when blocks are put;
+      // this test writes the table directly, so store the key count manually.
+      metadataStore.getStore().getMetadataTable()
+              .put(OzoneConsts.BLOCK_COUNT, numberOfKeysToWrite);
+    }
+
+    Map<String, String> metadata = new HashMap<>();
+    metadata.put("key1", "value1");
+    keyValueContainer.update(metadata, true);
+  }
+
+  /**
+   * Set container state to CLOSED.
+   */
+  private void closeContainer() {
+    keyValueContainerData.setState(
+        ContainerProtos.ContainerDataProto.State.CLOSED);
+  }
+
+  @Test
+  public void concurrentExport() throws Exception {
+    createContainer();
+    populate(100);
+    closeContainer();
+
+    AtomicReference<String> failed = new AtomicReference<>();
+
+    TarContainerPacker packer = new TarContainerPacker();
+    List<Thread> threads = IntStream.range(0, 20)
+        .mapToObj(i -> new Thread(() -> {
+          try {
+            File file = folder.newFile("concurrent" + i + ".tar.gz");
+            try (OutputStream out = new FileOutputStream(file)) {
+              keyValueContainer.exportContainerData(out, packer);
+            }
+          } catch (Exception e) {
+            failed.compareAndSet(null, e.getMessage());
+          }
+        }))
+        .collect(Collectors.toList());
+
+    threads.forEach(Thread::start);
+    for (Thread thread : threads) {
+      thread.join();
+    }
+
+    assertNull(failed.get());
+  }
+
   @Test
   public void testDuplicateContainer() throws Exception {
     try {
@@ -293,8 +312,7 @@
 
   @Test
   public void testDeleteContainer() throws Exception {
-    keyValueContainerData.setState(ContainerProtos.ContainerDataProto.State
-        .CLOSED);
+    closeContainer();
     keyValueContainer = new KeyValueContainer(
         keyValueContainerData, CONF);
     keyValueContainer.create(volumeSet, volumeChoosingPolicy, scmId);
@@ -373,8 +391,7 @@
   @Test
   public void testUpdateContainerUnsupportedRequest() throws Exception {
     try {
-      keyValueContainerData.setState(
-          ContainerProtos.ContainerDataProto.State.CLOSED);
+      closeContainer();
       keyValueContainer = new KeyValueContainer(keyValueContainerData, CONF);
       keyValueContainer.create(volumeSet, volumeChoosingPolicy, scmId);
       Map<String, String> metadata = new HashMap<>();
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestTarContainerPacker.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestTarContainerPacker.java
index bee77c7..d248ac1 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestTarContainerPacker.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestTarContainerPacker.java
@@ -23,7 +23,7 @@
 import java.io.FileOutputStream;
 import java.io.FileWriter;
 import java.io.IOException;
-import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.Paths;
@@ -55,7 +55,6 @@
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.apache.commons.compress.compressors.CompressorStreamFactory.GZIP;
 
 /**
@@ -187,7 +186,7 @@
     //read the container descriptor only
     try (FileInputStream input = new FileInputStream(targetFile.toFile())) {
       String containerYaml = new String(packer.unpackContainerDescriptor(input),
-          Charset.forName(UTF_8.name()));
+          StandardCharsets.UTF_8);
       Assert.assertEquals(TEST_DESCRIPTOR_FILE_CONTENT, containerYaml);
     }
 
@@ -203,7 +202,7 @@
     try (FileInputStream input = new FileInputStream(targetFile.toFile())) {
       descriptor =
           new String(packer.unpackContainerData(destinationContainer, input),
-              Charset.forName(UTF_8.name()));
+              StandardCharsets.UTF_8);
     }
 
     assertExampleMetadataDbIsGood(
@@ -359,7 +358,7 @@
 
     try (FileInputStream testFile = new FileInputStream(dbFile.toFile())) {
       List<String> strings = IOUtils
-          .readLines(testFile, Charset.forName(UTF_8.name()));
+          .readLines(testFile, StandardCharsets.UTF_8);
       Assert.assertEquals(1, strings.size());
       Assert.assertEquals(TEST_DB_FILE_CONTENT, strings.get(0));
     }
@@ -377,7 +376,7 @@
 
     try (FileInputStream testFile = new FileInputStream(chunkFile.toFile())) {
       List<String> strings = IOUtils
-          .readLines(testFile, Charset.forName(UTF_8.name()));
+          .readLines(testFile, StandardCharsets.UTF_8);
       Assert.assertEquals(1, strings.size());
       Assert.assertEquals(TEST_CHUNK_FILE_CONTENT, strings.get(0));
     }
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/replication/ReplicationSupervisorScheduling.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/replication/ReplicationSupervisorScheduling.java
new file mode 100644
index 0000000..2c517cb
--- /dev/null
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/replication/ReplicationSupervisorScheduling.java
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.container.replication;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.UUID;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
+import org.apache.hadoop.ozone.container.common.impl.ContainerSet;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Helper to check scheduling efficiency.
+ * <p>
+ * This check is not picked up by the default test run (the class name does not
+ * match the Test* naming pattern) but can be used to validate changes manually.
+ */
+public class ReplicationSupervisorScheduling {
+
+  private final Random random = new Random();
+
+  @Test
+  public void test() throws InterruptedException {
+    List<DatanodeDetails> datanodes = new ArrayList<>();
+    datanodes.add(MockDatanodeDetails.randomDatanodeDetails());
+    datanodes.add(MockDatanodeDetails.randomDatanodeDetails());
+
+    //locks representing the limited resources of remote and local disks
+
+    //datanode -> disk -> lock object (remote resources)
+    Map<UUID, Map<Integer, Object>> volumeLocks = new HashMap<>();
+
+    //disk -> lock (local resources)
+    Map<Integer, Object> destinationLocks = new HashMap<>();
+
+    //init the locks
+    for (DatanodeDetails datanode : datanodes) {
+      volumeLocks.put(datanode.getUuid(), new HashMap<>());
+      for (int i = 0; i < 10; i++) {
+        volumeLocks.get(datanode.getUuid()).put(i, new Object());
+      }
+    }
+
+    for (int i = 0; i < 10; i++) {
+      destinationLocks.put(i, new Object());
+    }
+
+    ContainerSet cs = new ContainerSet();
+
+    ReplicationSupervisor rs = new ReplicationSupervisor(cs,
+
+        //simplified executor emulating the current sequential download +
+        //import.
+        task -> {
+
+          //download, limited by the number of source datanodes
+          final DatanodeDetails sourceDatanode =
+              task.getSources().get(random.nextInt(task.getSources().size()));
+
+          final Map<Integer, Object> volumes =
+              volumeLocks.get(sourceDatanode.getUuid());
+          synchronized (volumes.get(random.nextInt(volumes.size()))) {
+            System.out.println("Downloading " + task.getContainerId() + " from "
+                + sourceDatanode.getUuid());
+            try {
+              Thread.sleep(1000);
+            } catch (InterruptedException ex) {
+              ex.printStackTrace();
+            }
+          }
+
+          //import, limited by the destination datanode
+          final int volumeIndex = random.nextInt(destinationLocks.size());
+          synchronized (destinationLocks.get(volumeIndex)) {
+            System.out.println(
+                "Importing " + task.getContainerId() + " to disk "
+                    + volumeIndex);
+
+            try {
+              Thread.sleep(1000);
+            } catch (InterruptedException ex) {
+              ex.printStackTrace();
+            }
+          }
+
+        }, 10);
+
+    final long start = System.currentTimeMillis();
+
+    //schedule 100 container replications
+    for (int i = 0; i < 100; i++) {
+      List<DatanodeDetails> sources = new ArrayList<>();
+      sources.add(datanodes.get(random.nextInt(datanodes.size())));
+      rs.addTask(new ReplicationTask(i, sources));
+    }
+    rs.shutdownAfterFinish();
+    final long executionTime = System.currentTimeMillis() - start;
+    System.out.println(executionTime);
+    Assert.assertTrue("Execution was too slow : " + executionTime + " ms",
+        executionTime < 100_000);
+  }
+
+}
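The Javadoc above notes that this check is skipped by the default test naming pattern and is meant for manual validation. One possible way to run it on demand is Surefire's `-Dtest` override; a sketch, assuming a standard Maven setup from the repository root:

```bash
# Run the scheduling check explicitly; -Dtest bypasses the default Test* include pattern
cd hadoop-hdds/container-service
mvn test -Dtest=ReplicationSupervisorScheduling
```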
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/replication/TestSimpleContainerDownloader.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/replication/TestSimpleContainerDownloader.java
index f29b157..7070425 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/replication/TestSimpleContainerDownloader.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/replication/TestSimpleContainerDownloader.java
@@ -115,7 +115,7 @@
           @Override
           protected CompletableFuture<Path> downloadContainer(
               long containerId, DatanodeDetails datanode
-          ) throws Exception {
+          ) {
             //download is always successful.
             return CompletableFuture
                 .completedFuture(Paths.get(datanode.getUuidString()));
@@ -169,7 +169,7 @@
       protected CompletableFuture<Path> downloadContainer(
           long containerId,
           DatanodeDetails datanode
-      ) throws Exception {
+      ) {
 
         if (datanodes.contains(datanode)) {
           if (directException) {
diff --git a/hadoop-hdds/container-service/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker b/hadoop-hdds/container-service/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker
new file mode 100644
index 0000000..3c9e1c8
--- /dev/null
+++ b/hadoop-hdds/container-service/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker
@@ -0,0 +1,16 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+mock-maker-inline
\ No newline at end of file
diff --git a/hadoop-hdds/docs/README.md b/hadoop-hdds/docs/README.md
index 8d5cdb7..c5c9167 100644
--- a/hadoop-hdds/docs/README.md
+++ b/hadoop-hdds/docs/README.md
@@ -14,7 +14,7 @@
   See the License for the specific language governing permissions and
   limitations under the License.
 -->
-# Hadoop Ozone/HDDS docs
+# Apache Ozone/HDDS docs
 
 This subproject contains the inline documentation for Ozone/HDDS components.
 
diff --git a/hadoop-hdds/docs/config.yaml b/hadoop-hdds/docs/config.yaml
index d0c69c6..44af7c4 100644
--- a/hadoop-hdds/docs/config.yaml
+++ b/hadoop-hdds/docs/config.yaml
@@ -24,6 +24,8 @@
     languageName: 中文
     weight: 2
 title: "Ozone"
+params:
+  ghrepo: https://github.com/apache/ozone/
 theme: "ozonedoc"
 pygmentsCodeFences: true
 uglyurls: true
diff --git a/hadoop-hdds/docs/content/_index.md b/hadoop-hdds/docs/content/_index.md
index 7890f6f..be0b303 100644
--- a/hadoop-hdds/docs/content/_index.md
+++ b/hadoop-hdds/docs/content/_index.md
@@ -21,7 +21,7 @@
   limitations under the License.
 -->
 
-# Apache Hadoop Ozone
+# Apache Ozone
 
 {{<figure class="ozone-usage" src="/ozone-usage.png" width="60%">}}
 
diff --git a/hadoop-hdds/docs/content/_index.zh.md b/hadoop-hdds/docs/content/_index.zh.md
index 689490b..57011d1 100644
--- a/hadoop-hdds/docs/content/_index.zh.md
+++ b/hadoop-hdds/docs/content/_index.zh.md
@@ -20,7 +20,7 @@
   limitations under the License.
 -->
 
-# Apache Hadoop Ozone
+# Apache Ozone
 
 {{<figure src="/ozone-usage.png" width="60%">}}
 
@@ -29,7 +29,7 @@
 
 Apache Spark、Hive 和 YARN 等应用无需任何修改即可使用 Ozone。Ozone 提供了 [Java API]({{<
 ref "JavaApi.zh.md" >}})、[S3 接口]({{< ref "S3.zh.md" >}})和命令行接口,极大地方便了 Ozone
- 在不同应用场景下的的使用。
+ 在不同应用场景下的使用。
 
 Ozone 的管理由卷、桶和键组成:
 
diff --git a/hadoop-hdds/docs/content/concept/OzoneManager.zh.md b/hadoop-hdds/docs/content/concept/OzoneManager.zh.md
index 3fc7fbf..2767805 100644
--- a/hadoop-hdds/docs/content/concept/OzoneManager.zh.md
+++ b/hadoop-hdds/docs/content/concept/OzoneManager.zh.md
@@ -77,6 +77,8 @@
 
 为了详细地了解 Ozone Manager ,本节针对它所提供的网络服务和持久化状态提供一个快速概述。
 
+### Ozone Manager 提供的网络服务
+
 Ozone 为客户端和管理命令提供网络服务,主要的服务如下:
 
  * 键、桶、卷 / 增删改查
@@ -93,7 +95,7 @@
    * ServiceList(用于服务发现)
    * DBUpdates(用于 [Recon]({{< ref path="feature/Recon.md" lang="en" >}}) 下载快照)
  
- **持久化状态**
+### 持久化状态
 
 以下数据将保存在 Ozone Manager 端的指定 RocksDB 目录中:
 
diff --git a/hadoop-hdds/docs/content/concept/Recon.zh.md b/hadoop-hdds/docs/content/concept/Recon.zh.md
new file mode 100644
index 0000000..5c67351
--- /dev/null
+++ b/hadoop-hdds/docs/content/concept/Recon.zh.md
@@ -0,0 +1,116 @@
+---
+title: "Recon"
+date: "2020-10-27"
+weight: 8
+menu: 
+  main:
+     parent: 概念
+summary: Recon 作为 Ozone 的管理和监视控制台。
+---
+<!---
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+Recon 充当 Ozone 的管理和监视控制台。它提供了 Ozone 的鸟瞰图,并通过基于 REST 的 API 和丰富的网页用户界面(Web UI)展示了集群的当前状态,从而帮助用户解决任何问题。
+
+
+## 高层次设计
+
+{{<figure src="/concept/ReconHighLevelDesign.png" width="800px">}}
+
+<br/>
+
+在较高的层次上,Recon 收集和汇总来自 Ozone Manager(OM)、Storage Container Manager(SCM)和数据节点(DN)的元数据,并充当中央管理和监视控制台。Ozone 管理员可以使用 Recon 查询系统的当前状态,而不会使 OM  或 SCM 过载。
+
+Recon 维护多个数据库,以支持批处理,更快的查询和持久化聚合信息。它维护 OM DB 和 SCM DB 的本地副本,以及用于持久存储聚合信息的 SQL 数据库。
+
+Recon 还与 Prometheus 集成,提供一个 HTTP 端点来查询 Prometheus 的 Ozone 指标,并在网页用户界面(Web UI)中显示一些关键时间点的指标。
+
+## Recon 和 Ozone Manager
+
+{{<figure src="/concept/ReconOmDesign.png" width="800px">}}
+
+<br/>
+
+Recon 最初从领导者 OM 的 HTTP 端点获取 OM rocks DB 的完整快照,解压缩文件并初始化 RocksDB 以进行本地查询。通过对最后一个应用的序列 ID 的 RPC 调用,定期请求领导者 OM 进行增量更新,从而使数据库保持同步。如果由于某种原因而无法检索增量更新或将其应用于本地数据库,则会再次请求一个完整快照以使本地数据库与 OM DB 保持同步。因此,Recon 可能会显示陈旧的信息,因为本地数据库不会总是同步的。
+
+## Recon 和 Storage Container Manager
+
+{{<figure src="/concept/ReconScmDesign.png" width="800px">}}
+
+<br/>
+
+Recon 还充当数据节点的被动 SCM。在集群中配置 Recon 时,所有数据节点都向 Recon 注册,并像 SCM 一样向 Recon 发送心跳、容器报告、增量容器报告等。Recon 使用它从数据节点得到的所有信息在本地构建自己的 SCM rocks DB 副本。Recon 从不向数据节点发送任何命令作为响应,而只是充当被动 SCM 以更快地查找 SCM 元数据。
+
+## <a name="task-framework"></a> 任务框架
+
+Recon 有其自己的任务框架,可对从 OM 和 SCM 获得的数据进行批处理。一个任务可以在 OM DB 或 SCM DB 上监听和操作数据库事件,如`PUT`、`DELETE`、`UPDATE`等。在此基础上,任务实现`org.apache.hadoop.ozone.recon.tasks.ReconOmTask`或者扩展`org.apache.hadoop.ozone.recon.scm.ReconScmTask`。
+
+`ReconOmTask`的一个示例是`ContainerKeyMapperTask`,它在 RocksDB 中持久化保留了容器 -> 键映射。当容器被报告丢失或处于不健康的运行状态时,这有助于了解哪些键是容器的一部分。另一个示例是`FileSizeCountTask`,它跟踪 SQL 数据库中给定文件大小范围内的文件计数。这些任务有两种情况的实现:
+ 
+ - 完整快照(reprocess())
+ - 增量更新(process())
+ 
+当从领导者 OM 获得 OM DB 的完整快照时,将对所有注册的 OM 任务调用 reprocess()。在随后的增量更新中,将在这些 OM 任务上调用 process()。
+
+`ReconScmTask`的示例是`ContainerHealthTask`,它以可配置的时间间隔运行,扫描所有容器的列表,并将不健康容器的状态(`MISSING`、`MIS_REPLICATED`、`UNDER_REPLICATED`、`OVER_REPLICATED`)持久化保留在 SQL 表中。此信息用于确定集群中是否有丢失的容器。
+
+## Recon 和 Prometheus
+
+Recon 可以与任何配置为收集指标的 Prometheus 实例集成,并且可以在数据节点和 Pipelines 页面的 Recon UI 中显示有用的信息。Recon 还公开了一个代理端点 ([/指标]({{< ref path="interface/ReconApi.zh.md#metrics" >}})) 来查询 Prometheus。可以通过将此配置`ozone.recon.prometheus.http.endpoint`设置为 Prometheus 端点如`ozone.recon.prometheus.http.endpoint=localhost:9090`来启用此集成。
+
+## API 参考
+
+[链接到完整的 API 参考]({{< ref path="interface/ReconApi.zh.md" >}})
+   
+## 持久化状态
+
+ * [OM database]({{< ref "concept/OzoneManager.zh.md#持久化状态" >}})的本地副本
+ * [SCM database]({{< ref "concept/StorageContainerManager.zh.md#持久化状态" >}})的本地副本
+ * 以下数据在 Recon 中持久化在指定的 RocksDB 目录下: 
+     * ContainerKey 表
+         * 存储映射(容器,键) -> 计数
+     * ContainerKeyCount 表
+         * 存储容器 ID  -> 容器内的键数
+ * 以下数据存储在已配置的 SQL 数据库中(默认为 Derby ):
+     * GlobalStats 表
+         * 一个键 -> Value table 用于存储集群中出现的卷/桶/键的总数等聚合信息
+     * FileCountBySize 表
+         * 跟踪集群中文件大小范围内的文件数量
+     * ReconTaskStatus 表
+         * 跟踪在[Recon 任务框架](#task-framework)中已注册的 OM 和 SCM DB 任务的状态和最后运行时间戳
+     * ContainerHistory 表
+         * 存储容器副本 -> 具有最新已知时间戳记的数据节点映射。当一个容器被报告丢失时,它被用来确定最后已知的数据节点。
+     * UnhealthyContainers 表
+         * 随时跟踪集群中所有不健康的容器(MISSING、UNDER_REPLICATED、OVER_REPLICATED、MIS_REPLICATED)
+
+
+## 需要关注的配置项
+
+配置项 |默认值 | <div style="width:300px;">描述</div>
+----|---------|------------
+ozone.recon.http-address | 0.0.0.0:9888 | Recon web UI 监听的地址和基本端口。
+ozone.recon.address | 0.0.0.0:9891 | Recon 的 RPC 地址。
+ozone.recon.db.dir | none | Recon Server 存储其元数据的目录。
+ozone.recon.om.db.dir | none | Recon Server 存储其 OM 快照 DB 的目录。
+ozone.recon.om.snapshot<br>.task.interval.delay | 10m | Recon 以分钟间隔请求 OM DB 快照。
+ozone.recon.task<br>.missingcontainer.interval | 300s | 定期检查集群中不健康容器的时间间隔。
+ozone.recon.sql.db.jooq.dialect | DERBY | 请参考 [SQL 方言](https://www.jooq.org/javadoc/latest/org.jooq/org/jooq/SQLDialect.html) 来指定不同的方言。
+ozone.recon.sql.db.jdbc.url | jdbc:derby:${ozone.recon.db.dir}<br>/ozone_recon_derby.db | Recon SQL database 的 jdbc url。
+ozone.recon.sql.db.username | none | Recon SQL数据库的用户名。
+ozone.recon.sql.db.password | none | Recon SQL数据库的密码。
+ozone.recon.sql.db.driver | org.apache.derby.jdbc<br>.EmbeddedDriver | Recon SQL数据库的 jdbc driver。
+
diff --git a/hadoop-hdds/docs/content/concept/StorageContainerManager.zh.md b/hadoop-hdds/docs/content/concept/StorageContainerManager.zh.md
index 1c63f1b..7adecde 100644
--- a/hadoop-hdds/docs/content/concept/StorageContainerManager.zh.md
+++ b/hadoop-hdds/docs/content/concept/StorageContainerManager.zh.md
@@ -44,7 +44,7 @@
 
 针对 Storage Container Manager 的详细视图,本节提供有关网络服务和持久化数据的快速概述。
 
-**Storage Container Manager 提供的网络服务:**
+### Storage Container Manager 提供的网络服务:
 
  * 管道: 列出/删除/激活/停用
     * 管道是形成一组复制组的数据节点
@@ -62,7 +62,7 @@
    
  注意:客户端不能直接连接 SCM 。
  
-**持久化状态**
+### 持久化状态
  
  以下数据持久化在 Storage Container Manager 端的指定 RocksDB 目录中
  
@@ -83,4 +83,4 @@
 ozone.scm.block.size | 256MB |  数据块的默认大小
 hdds.scm.safemode.min.datanode | 1 | 能够启动实际工作所需的最小数据节点数
 ozone.scm.http-address | 0.0.0.0:9876 | SCM 服务端使用的 HTTP 地址
-ozone.metadata.dirs | none | 存储持久化数据的目录(RocksDB)
\ No newline at end of file
+ozone.metadata.dirs | none | 存储持久化数据的目录(RocksDB)
diff --git a/hadoop-hdds/docs/content/design/decommissioning.md b/hadoop-hdds/docs/content/design/decommissioning.md
index 6c8e08e..e4abdf6 100644
--- a/hadoop-hdds/docs/content/design/decommissioning.md
+++ b/hadoop-hdds/docs/content/design/decommissioning.md
@@ -36,7 +36,7 @@
  * The progress of the decommissioning should be trackable
  * The nodes under decommissioning / maintenance mode should not be used for new pipelines / containers
  * The state of the datanodes should be persisted / replicated by the SCM (in HDFS the decommissioning info exclude/include lists are replicated manually by the admin). If a datanode is marked for decommissioning, this state should be available after SCM and/or Datanode restarts.
- * We need to support validations before decommissioing (but the violations can be ignored by the admin).
+ * We need to support validations before decommissioning (but the violations can be ignored by the admin).
  * The administrator should be notified when a node can be turned off.
  * The maintenance mode can be time constrained: if the node is marked for maintenance for one week and is not up after that week, the containers should be considered lost (DEAD node) and should be replicated.
 
@@ -128,7 +128,7 @@
  STALE                | Some heartbeats were missed for the node.
   DEAD                 | The stale node has not been recovered.
  ENTERING_MAINTENANCE | The in-progress state: scheduling is disabled but the node can't be turned off yet due to in-progress replication.
-  IN_MAINTENANCE       | Node can be turned off but we expecteed to get it back and have all the replicas.
+  IN_MAINTENANCE       | Node can be turned off, but we expect to get it back and have all the replicas.
   DECOMMISSIONING      | The in-progress state, scheduling is disabled, all the containers should be replicated to other nodes.
  DECOMMISSIONED       | The node can be turned off, all the containers are replicated to other machines
 
@@ -148,7 +148,7 @@
 
    * Container is closed.
    * We have at least one HEALTHY copy at all times.
-   * For entering DECOMMISSIONED mode `maintenance + healthy` must equal to `expectedeCount`
+   * For entering DECOMMISSIONED mode `maintenance + healthy` must equal `expectedCount`
 
  5. We will update the node state to DECOMMISSIONED or IN_MAINTENANCE once that state is reached.
 
@@ -186,7 +186,7 @@
 
 In case the _Replica count_ is positive, it means that we need to make more replicas. If the number is negative, it means that we are over-replicated and we need to remove some replicas of this container. If the _Replica count_ for a container is zero, it means that we have the expected number of replicas in the cluster.
 
-To support idempontent placement strategies we should substract the in-fligt replications from the result: If there are one in-flight replication process and two replicas we won't start a new replication command unless the original command is timed out. The timeout is configured with `hdds.scm.replication.event.timeout` and the default value is 10 minutes.
+To support idempotent placement strategies we should subtract the in-flight replications from the result: if there is one in-flight replication process and two existing replicas, we won't start a new replication command unless the original command times out (with an expected count of three, 3 - (2 + 1) = 0, so nothing new is scheduled). The timeout is configured with `hdds.scm.replication.event.timeout` and the default value is 10 minutes.
 
 More precisely, the current algorithm is the following:
 
@@ -249,7 +249,7 @@
 **From DECOMMISSIONING to DECOMMISSIONED**:
 
  * There is at least one healthy replica
- * We have three replicas (both helthy and maintenance)
+ * We have three replicas (both healthy and maintenance)
 
 Which means that our stop condition can be formalized as:
 
diff --git a/hadoop-hdds/docs/content/feature/Quota.md b/hadoop-hdds/docs/content/feature/Quota.md
index 933bbb5..ecab238 100644
--- a/hadoop-hdds/docs/content/feature/Quota.md
+++ b/hadoop-hdds/docs/content/feature/Quota.md
@@ -32,11 +32,32 @@
 1. Storage Space level quota
 
 Administrators should be able to define how much storage space a Volume or Bucket can use. The following settings for storage space quota are currently supported:
+
 a. By default, the quota for volume and bucket is not enabled.
-b. When volume quota is enabled, the total size of bucket quota cannot exceed volume.
+
+b. When volume quota is enabled, the total quota of the buckets cannot exceed the volume quota.
+
 c. Bucket quota can be set separately without enabling Volume quota. In that case the size of the bucket quota is unrestricted.
+
 d. Volume quota is not currently supported separately, and volume quota takes effect only if bucket quota is set, because Ozone only checks the usedBytes of the bucket when a key is written.
 
+e. If the cluster was upgraded from a version older than 1.1.0, using quota on pre-existing volumes and buckets is not recommended (check the volume or bucket info: a quota value of -2 means the volume or bucket is old). Since old keys are not counted in the bucket's usedBytes, the quota accounting is inaccurate in this case.
+
+f. If the volume's quota is enabled, then the bucket's quota cannot be cleared.
+
+2. Namespace quota
+
+Administrators should be able to define how much namespace (that is, how many names) a Volume or Bucket can use. The following settings for namespace quota are supported:
+
+a. By default, the namespace quota for volume and bucket is not enabled (thus unlimited quota).
+
+b. When volume namespace quota is enabled, the total number of buckets under the volume cannot exceed the volume namespace quota.
+
+c. When bucket namespace quota is enabled, the total number of keys under the bucket cannot exceed the bucket namespace quota.
+
+d. Linked buckets do not consume namespace quota.
+
+e. If the cluster was upgraded from a version older than 1.1.0, using quota on pre-existing volumes and buckets is not recommended (check the volume or bucket info: a quota value of -2 means the volume or bucket is old). Since old keys are not counted in the bucket's namespace quota, the quota accounting is inaccurate in this case.
 
 ## Client usage
 ### Storage Space level quota
@@ -66,9 +87,10 @@
 
 Total bucket quota should not be greater than its volume quota. If we have a 10MB volume, the sum of the sizes of all buckets under this volume cannot exceed 10MB; otherwise, setting the bucket quota fails.
 
-#### Clear the quota for Volume1. The Bucket cleanup command is similar.
+#### Clear the quota for volume and bucket
 ```shell
 bin/ozone sh volume clrquota --space-quota /volume1
+bin/ozone sh bucket clrquota --space-quota /volume1/bucket1
 ```
 
 #### Check quota and usedBytes for volume and bucket
@@ -76,4 +98,42 @@
 bin/ozone sh volume info /volume1
 bin/ozone sh bucket info /volume1/bucket1
 ```
-We can get the quota value and usedBytes in the info of volume and bucket.
\ No newline at end of file
+We can get the quota value and usedBytes in the info of volume and bucket.
+
+### Namespace quota
+Namespace quota is a number that represents how many unique names can be used. This number cannot be greater than Long.MAX_VALUE in Java.
+
+#### Volume Namespace quota
+```shell
+bin/ozone sh volume create --namespace-quota 100 /volume1
+```
+This means setting the namespace quota of Volume1 to 100.
+
+```shell
+bin/ozone sh volume setquota --namespace-quota 1000 /volume1
+```
+This behavior changes the namespace quota of Volume1 to 1000.
+
+#### Bucket Namespace quota
+```shell
+bin/ozone sh bucket create --namespace-quota 100 /volume1/bucket1
+```
+This means that at most 100 names (keys) can be used in bucket1.
+
+```shell
+bin/ozone sh bucket setquota --namespace-quota 1000 /volume1/bucket1 
+```
+This behavior changes the namespace quota of bucket1 to 1000.
+
+#### Clear the quota for volume and bucket
+```shell
+bin/ozone sh volume clrquota --namespace-quota /volume1
+bin/ozone sh bucket clrquota --namespace-quota /volume1/bucket1
+```
+
+#### Check quota and usedNamespace for volume and bucket
+```shell
+bin/ozone sh volume info /volume1
+bin/ozone sh bucket info /volume1/bucket1
+```
+We can get the quota value and usedNamespace in the info of volume and bucket.
\ No newline at end of file
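Tying the commands above together, a minimal namespace quota walk-through could look like the following sketch (volume and bucket names are placeholders; only commands documented on this page are used):

```shell
bin/ozone sh volume create --namespace-quota 100 /volume1
bin/ozone sh bucket create --namespace-quota 10 /volume1/bucket1
bin/ozone sh bucket info /volume1/bucket1      # shows the quota value and usedNamespace
bin/ozone sh bucket clrquota --namespace-quota /volume1/bucket1
bin/ozone sh volume clrquota --namespace-quota /volume1
```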
diff --git a/hadoop-hdds/docs/content/feature/Quota.zh.md b/hadoop-hdds/docs/content/feature/Quota.zh.md
index b3f0c3c..eb6e084 100644
--- a/hadoop-hdds/docs/content/feature/Quota.zh.md
+++ b/hadoop-hdds/docs/content/feature/Quota.zh.md
@@ -30,10 +30,32 @@
 1. Storage space级别配额
 
  管理员应该能够定义一个Volume或Bucket可以使用多少存储空间。目前支持以下storage space quota的设置:
+ 
  a. 默认情况下volume和bucket的quota不启用。
+ 
  b. 当volume quota启用时,bucket quota的总大小不能超过volume。
+ 
  c. 可以在不启用volume quota的情况下单独给bucket设置quota。此时bucket quota的大小是不受限制的。
+ 
  d. 目前不支持单独设置volume quota,只有在设置了bucket quota的情况下volume quota才会生效。因为ozone在写入key时只检查bucket的usedBytes。
+ 
+ e. 如果集群从小于1.1.0的旧版本升级而来,则不建议在旧volume和bucket(可以通过查看volume或者bucket的info确认,如果quota值是-2,那么这个volume或者bucket就是旧的)上使用配额。由于旧的key没有计算到bucket的usedBytes中,所以此时配额设置是不准确的。
+ 
+ f. 如果volume quota被启用,那么bucket quota将不能被清除。
+
+2. 命名空间配额
+
+ 管理员应当能够定义一个Volume或Bucket可以使用多少命名空间。目前支持命名空间的配额设置为:
+
+ a. 默认情况下volume和bucket的命名空间配额不启用(即无限配额)。
+
+ b. 当volume命名空间配额启用时,该volume的bucket数目不能超过此配额。
+
+ c. 当bucket的命名空间配额启用时,该bucket的key数目不能超过此配额。
+
+ d. Linked bucket不消耗命名空间配额。
+
+ e. 如果集群从小于1.1.0的旧版本升级而来,则不建议在旧volume和bucket(可以通过查看volume或者bucket的info确认,如果quota值是-2,那么这个volume或者bucket就是旧的)上使用配额。由于旧的key没有计算到bucket的命名空间配额中,所以此时配额设置是不准确的。
 
 ## 客户端用法
 ### Storage space级别配额
@@ -62,9 +84,10 @@
 
 bucket的总配额 不应大于其Volume的配额。让我们看一个例子,如果我们有一个10MB的Volume,该volume下所有bucket的大小之和不能超过10MB,否则设置bucket quota将失败。
 
-#### 清除Volume1的配额, Bucket清除命令与此类似
+#### 清除volume和bucket的配额
 ```shell
 bin/ozone sh volume clrquota --space-quota /volume1
+bin/ozone sh bucket clrquota --space-quota /volume1/bucket1
 ```
 #### 查看volume和bucket的quota值以及usedBytes
 ```shell
@@ -72,3 +95,40 @@
 bin/ozone sh bucket info /volume1/bucket1
 ```
 我们能够在volume和bucket的info中查看quota及usedBytes的值
+
+### Namespace quota
+命名空间配额是一个数字,表示可以使用多少个唯一的名字。这个数字不能超过Java long数据类型的最大值。
+
+#### Volume Namespace quota
+```shell
+bin/ozone sh volume create --namespace-quota 100 /volume1
+```
+这意味着将volume1的命名空间配额设置为100。
+
+```shell
+bin/ozone sh volume setquota --namespace-quota 1000 /volume1
+```
+此行为将volume1的命名空间配额更改为1000。
+
+#### Bucket Namespace quota
+```shell
+bin/ozone sh bucket create --namespace-quota 100 /volume1/bucket1
+```
+这意味着bucket1允许我们使用100的命名空间。
+
+```shell
+bin/ozone sh bucket setquota --namespace-quota 1000 /volume1/bucket1 
+```
+该行为将bucket1的命名空间配额更改为1000。
+
+#### 清除volume和bucket的配额
+```shell
+bin/ozone sh volume clrquota --namespace-quota /volume1
+bin/ozone sh bucket clrquota --namespace-quota /volume1/bucket1
+```
+#### 查看volume和bucket的quota值以及usedNamespace
+```shell
+bin/ozone sh volume info /volume1
+bin/ozone sh bucket info /volume1/bucket1
+```
+我们能够在volume和bucket的info中查看quota及usedNamespace的值
diff --git a/hadoop-hdds/docs/content/feature/Recon.zh.md b/hadoop-hdds/docs/content/feature/Recon.zh.md
index 5a41620..b7d04a7 100644
--- a/hadoop-hdds/docs/content/feature/Recon.zh.md
+++ b/hadoop-hdds/docs/content/feature/Recon.zh.md
@@ -1,5 +1,5 @@
 ---
-title: "Recon"
+title: "Recon 服务器"
 weight: 7
 menu:
    main:
@@ -23,27 +23,10 @@
   limitations under the License.
 -->
 
-Recon 是 Ozone 中用于分析服务的网页用户界面(Web UI)。它是一个可选组件,但强烈建议您使用,因为它可以增加可视性。
+Recon 作为 Ozone 的管理和监视控制台。它是一个可选组件,但强烈建议将其添加到集群中,因为 Recon 可以在关键时刻帮助您对集群进行故障排除。请参阅 [Recon 架构]({{< ref "concept/Recon.zh.md" >}}) 以获得详细的架构概述和 [Recon API]({{< ref path="interface/ReconApi.zh.md" >}}) 文档,以获得 HTTP API 参考。
 
-Recon 从 Ozone 集群中**收集**所有数据,并将其存储在 SQL数据库中,以便进一步分析。
-
- 1. Ozone Manager 的数据是通过异步过程在后台下载的。OM 会定期创建 RocksDB 快照,并将增量数据复制到 Recon 进行处理。
-
- 2. 数据节点不仅可以将心跳发送到 SCM,也能发送到 Recon。Recon 可以成为心跳的唯读(Read-only)监听器,并根据收到的信息更新本地数据库。
-
-当 Recon 配置完成时,我们便可以启动服务。
+Recon 是一个自带 HTTP 网页服务器的服务,可以通过以下命令启动。
 
 {{< highlight bash >}}
 ozone --daemon start recon
 {{< /highlight >}}
-
-## 需要关注的配置项
-
-配置项 | 默认值 | 描述
--------|--------|-----
-ozone.recon.http-address | 0.0.0.0:9888 | Recon web UI 监听的地址和基本端口。
-ozone.recon.address | 0.0.0.0:9891 | Recon 的 RPC 地址。
-ozone.recon.db.dir | none | Recon Server 存储其元数据的目录。
-ozone.recon.om.db.dir | none | Recon Server 存储其 OM 快照 DB 的目录。
-ozone.recon.om.snapshot.task.interval.delay | 10m | Recon 以分钟间隔请求 OM DB 快照。
-
diff --git a/hadoop-hdds/docs/content/interface/ReconApi.zh.md b/hadoop-hdds/docs/content/interface/ReconApi.zh.md
new file mode 100644
index 0000000..a134fd4
--- /dev/null
+++ b/hadoop-hdds/docs/content/interface/ReconApi.zh.md
@@ -0,0 +1,502 @@
+---
+title: Recon API
+weight: 4
+menu:
+   main:
+      parent: "编程接口"
+summary: Recon 服务器支持 HTTP 端点,以帮助故障排除和监视 Ozone 集群。
+---
+
+<!---
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+Recon API v1 是一组 HTTP 端点,可以帮助您了解 Ozone 集群的当前状态,并在需要时进行故障排除。
+
+### HTTP 端点
+
+#### 容器
+
+* **/containers**
+
+    **URL 结构**
+    ```
+    GET /api/v1/containers
+    ```
+
+    **参数**
+
+    * prevKey (可选)
+    
+        只回传ID大于给定的 prevKey 的容器。
+        示例:prevKey=1
+
+    * limit (可选)
+    
+        只回传有限数量的结果。默认限制是1000。
+    
+    **回传**
+    
+    回传所有 ContainerMetadata 对象。
+    
+    ```json
+    {
+      "data": {
+        "totalCount": 3,
+        "containers": [
+          {
+            "ContainerID": 1,
+            "NumberOfKeys": 834
+          },
+          {
+            "ContainerID": 2,
+            "NumberOfKeys": 833
+          },
+          {
+            "ContainerID": 3,
+            "NumberOfKeys": 833
+          }
+        ]
+      }
+    }
+    ```
+
+* **/containers/:id/keys**
+
+    **URL 结构**
+    ```
+    GET /api/v1/containers/:id/keys
+    ```
+    
+    **参数**
+    
+    * prevKey (可选)
+     
+        只回传在给定的 prevKey 键前缀之后的键。
+        示例:prevKey=/vol1/bucket1/key1
+        
+    * limit (可选)
+    
+        只回传有限数量的结果。默认限制是1000。
+        
+    **回传**
+    
+    回传给定容器 ID 的所有 KeyMetadata 对象。
+    
+    ```json
+    {
+      "totalCount":7,
+      "keys": [
+        {
+          "Volume":"vol-1-73141",
+          "Bucket":"bucket-3-35816",
+          "Key":"key-0-43637",
+          "DataSize":1000,
+          "Versions":[0],
+          "Blocks": {
+            "0": [
+              {
+                "containerID":1,
+                "localID":105232659753992201
+              }
+            ]
+          },
+          "CreationTime":"2020-11-18T18:09:17.722Z",
+          "ModificationTime":"2020-11-18T18:09:30.405Z"
+        },
+        ...
+      ]
+    }
+    ```
+* **/containers/missing**
+    
+    **URL 结构**
+    ```
+    GET /api/v1/containers/missing
+    ```
+    
+    **参数**
+    
+    没有参数。
+    
+    **回传**
+    
+    回传所有丢失容器的 MissingContainerMetadata 对象。
+    
+    ```json
+    {
+    	"totalCount": 26,
+    	"containers": [{
+    		"containerID": 1,
+    		"missingSince": 1605731029145,
+    		"keys": 7,
+    		"pipelineID": "88646d32-a1aa-4e1a",
+    		"replicas": [{
+    			"containerId": 1,
+    			"datanodeHost": "localhost-1",
+    			"firstReportTimestamp": 1605724047057,
+    			"lastReportTimestamp": 1605731201301
+    		}, 
+            ...
+            ]
+    	},
+        ...
+        ]
+    }
+    ```
+* **/containers/:id/replicaHistory**
+
+    **URL 结构**
+    ```
+    GET /api/v1/containers/:id/replicaHistory
+    ```
+    
+    **参数**
+    
+    没有参数。
+    
+    **回传**
+
+    回传给定容器 ID 的所有 ContainerHistory 对象。
+    
+    ```json
+    [
+      {
+        "containerId": 1,
+        "datanodeHost": "localhost-1",
+        "firstReportTimestamp": 1605724047057,
+        "lastReportTimestamp": 1605730421294
+      },
+      ...
+    ]
+    ```
+* **/containers/unhealthy**
+
+    **URL 结构**
+     ```
+     GET /api/v1/containers/unhealthy
+     ```
+     
+    **参数**
+    
+    * batchNum (可选)
+
+        回传结果的批号(如“页码”)。
+        传递1,将回传记录1到limit。传递2,将回传limit + 1到2 * limit,依此类推。
+        
+    * limit (可选)
+    
+        只回传有限数量的结果。默认限制是1000。
+        
+    **回传**
+    
+    回传所有不健康容器的 UnhealthyContainerMetadata 对象。
+    
+     ```json
+     {
+     	"missingCount": 2,
+     	"underReplicatedCount": 0,
+     	"overReplicatedCount": 0,
+     	"misReplicatedCount": 0,
+     	"containers": [{
+     		"containerID": 1,
+     		"containerState": "MISSING",
+     		"unhealthySince": 1605731029145,
+     		"expectedReplicaCount": 3,
+     		"actualReplicaCount": 0,
+     		"replicaDeltaCount": 3,
+     		"reason": null,
+     		"keys": 7,
+     		"pipelineID": "88646d32-a1aa-4e1a",
+     		"replicas": [{
+     			"containerId": 1,
+     			"datanodeHost": "localhost-1",
+     			"firstReportTimestamp": 1605722960125,
+     			"lastReportTimestamp": 1605731230509
+     		}, 
+            ...
+            ]
+     	},
+        ...
+        ]
+     } 
+     ```
+     
+* **/containers/unhealthy/:state**
+
+    **URL 结构**
+    ```
+    GET /api/v1/containers/unhealthy/:state
+    ```
+     
+    **参数**
+    
+    * batchNum (可选)
+    
+        回传结果的批号(如“页码”)。
+        传递1,将回传记录1到limit。传递2,将回传limit + 1到2 * limit,依此类推。
+        
+    * limit (可选)
+    
+        只回传有限数量的结果。默认限制是1000。
+        
+    **回传**
+    
+    回传处于给定状态的容器的 UnhealthyContainerMetadata 对象。
+    不健康的容器状态可能为`MISSING`, `MIS_REPLICATED`, `UNDER_REPLICATED`, `OVER_REPLICATED`。
+    响应结构与`/containers/unhealthy`相同。
+    
+#### 集群状态
+
+* **/clusterState**
+
+    **URL 结构**
+    ```
+    GET /api/v1/clusterState
+    ```
+     
+    **参数**
+    
+    没有参数。
+    
+    **回传**
+    
+    返回 Ozone 集群当前状态的摘要。
+    
+     ```json
+     {
+     	"pipelines": 5,
+     	"totalDatanodes": 4,
+     	"healthyDatanodes": 4,
+     	"storageReport": {
+     		"capacity": 1081719668736,
+     		"used": 1309212672,
+     		"remaining": 597361258496
+     	},
+     	"containers": 26,
+     	"volumes": 6,
+     	"buckets": 26,
+     	"keys": 25
+     }
+     ```
+     
+#### 数据节点
+
+* **/datanodes**
+
+    **URL 结构**
+    ```
+    GET /api/v1/datanodes
+    ```
+    
+    **参数**
+    
+    没有参数。
+    
+    **回传**
+    
+    回传集群中的所有数据节点。
+    
+    ```json
+    {
+     	"totalCount": 4,
+     	"datanodes": [{
+     		"uuid": "f8f8cb45-3ab2-4123",
+     		"hostname": "localhost-1",
+     		"state": "HEALTHY",
+     		"lastHeartbeat": 1605738400544,
+     		"storageReport": {
+     			"capacity": 270429917184,
+     			"used": 358805504,
+     			"remaining": 119648149504
+     		},
+     		"pipelines": [{
+     			"pipelineID": "b9415b20-b9bd-4225",
+     			"replicationType": "RATIS",
+     			"replicationFactor": 3,
+     			"leaderNode": "localhost-2"
+     		}, {
+     			"pipelineID": "3bf4a9e9-69cc-4d20",
+     			"replicationType": "RATIS",
+     			"replicationFactor": 1,
+     			"leaderNode": "localhost-1"
+     		}],
+     		"containers": 17,
+     		"leaderCount": 1
+     	},
+        ...
+        ]
+     }
+     ```
+     
+#### 管道
+
+* **/pipelines**
+
+    **URL 结构**
+    ```
+    GET /api/v1/pipelines
+    ```
+    **参数**
+    
+    没有参数
+    
+    **回传**
+    
+    回传在集群中的所有管道。
+    
+    ```json
+     {
+     	"totalCount": 5,
+     	"pipelines": [{
+     		"pipelineId": "b9415b20-b9bd-4225",
+     		"status": "OPEN",
+     		"leaderNode": "localhost-1",
+     		"datanodes": ["localhost-1", "localhost-2", "localhost-3"],
+     		"lastLeaderElection": 0,
+     		"duration": 23166128,
+     		"leaderElections": 0,
+     		"replicationType": "RATIS",
+     		"replicationFactor": 3,
+     		"containers": 0
+     	},
+        ...
+        ]
+     }
+     ```  
+
+#### 任务
+
+* **/task/status**
+
+    **URL 结构**
+    ```
+    GET /api/v1/task/status
+    ```
+    
+    **参数**
+    
+    没有参数
+    
+    **回传**
+    
+    回传所有 Recon 任务的状态。
+  
+    ```json
+     [
+       {
+     	"taskName": "OmDeltaRequest",
+     	"lastUpdatedTimestamp": 1605724099147,
+     	"lastUpdatedSeqNumber": 186
+       },
+       ...
+     ]
+    ```
+    
+#### 使用率
+
+* **/utilization/fileCount**
+
+    **URL 结构**
+    ```
+    GET /api/v1/utilization/fileCount
+    ```
+    
+    **参数**
+    
+    * volume (可选)
+    
+        根据给定的卷名过滤结果。
+        
+    * bucket (可选)
+    
+        根据给定的桶名过滤结果。
+        
+    * fileSize (可选)
+
+        根据给定的文件大小筛选结果。
+        
+    **回传**
+    
+    回传不同文件范围内的文件计数,其中响应对象中的`fileSize`是文件大小范围的上限。
+    
+    ```json
+     [{
+     	"volume": "vol-2-04168",
+     	"bucket": "bucket-0-11685",
+     	"fileSize": 1024,
+     	"count": 1
+     }, {
+     	"volume": "vol-2-04168",
+     	"bucket": "bucket-1-41795",
+     	"fileSize": 1024,
+     	"count": 1
+     }, {
+     	"volume": "vol-2-04168",
+     	"bucket": "bucket-2-93377",
+     	"fileSize": 1024,
+     	"count": 1
+     }, {
+     	"volume": "vol-2-04168",
+     	"bucket": "bucket-3-50336",
+     	"fileSize": 1024,
+     	"count": 2
+     }]
+    ```
+    
+#### <a name="metrics"></a> 指标
+
+* **/metrics/:api**
+
+    **URL 结构**
+    ```
+    GET /api/v1/metrics/:api
+    ```
+    
+    **参数**
+
+    请参阅 [Prometheus HTTP API 参考](https://prometheus.io/docs/prometheus/latest/querying/api/) 以获取完整的查询文档。
+
+    **回传**
+
+    这是 Prometheus 的代理端点,并回传与 Prometheus 端点相同的响应。
+    示例:/api/v1/metrics/query?query=ratis_leader_election_electionCount
+    
+     ```json
+     {
+       "status": "success",
+       "data": {
+         "resultType": "vector",
+         "result": [
+           {
+             "metric": {
+               "__name__": "ratis_leader_election_electionCount",
+               "exported_instance": "33a5ac1d-8c65-4c74-a0b8-9314dfcccb42",
+               "group": "group-03CA9397D54B",
+               "instance": "ozone_datanode_1:9882",
+               "job": "ozone"
+             },
+             "value": [
+               1599159384.455,
+               "5"
+             ]
+           }
+         ]
+       }
+     }
+     ```
+
+
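For quick manual checks, the endpoints documented above can be queried directly. A sketch, assuming Recon is reachable on localhost at the default `ozone.recon.http-address` port 9888:

```bash
# Cluster summary
curl -s "http://localhost:9888/api/v1/clusterState"
# First 10 containers reported as MISSING
curl -s "http://localhost:9888/api/v1/containers/unhealthy/MISSING?limit=10"
```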
diff --git a/hadoop-hdds/docs/content/interface/S3.md b/hadoop-hdds/docs/content/interface/S3.md
index 3404cb8..6511642 100644
--- a/hadoop-hdds/docs/content/interface/S3.md
+++ b/hadoop-hdds/docs/content/interface/S3.md
@@ -120,10 +120,10 @@
 To make any other buckets available with the S3 interface a "symbolic linked" bucket can be created:
 
 ```bash
-ozone sh create volume /s3v
-ozone sh create volume /vol1
+ozone sh volume create /s3v
+ozone sh volume create /vol1
 
-ozone sh create bucket /vol1/bucket1
+ozone sh bucket create /vol1/bucket1
 ozone sh bucket link /vol1/bucket1 /s3v/common-bucket
 ```
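After the link is created, the bucket becomes reachable through the S3 interface under the exposed name. A hedged usage sketch with the AWS CLI, assuming the S3 Gateway runs on localhost with its default port 9878 and credentials are already configured:

```bash
aws s3api list-objects --bucket common-bucket --endpoint-url http://localhost:9878
```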
 
diff --git a/hadoop-hdds/docs/content/recipe/Prometheus.md b/hadoop-hdds/docs/content/recipe/Prometheus.md
index f63b46e..9c852e0 100644
--- a/hadoop-hdds/docs/content/recipe/Prometheus.md
+++ b/hadoop-hdds/docs/content/recipe/Prometheus.md
@@ -46,9 +46,9 @@
 
 * Restart the Ozone Manager and Storage Container Manager and check the prometheus endpoints:
 
- * http://scm:9874/prom
+ * http://scm:9876/prom
 
- * http://ozoneManager:9876/prom
+ * http://ozoneManager:9874/prom
 
 * Create a prometheus.yaml configuration with the previous endpoints:
 
@@ -93,4 +93,4 @@
 cd compose/ozone
 export COMPOSE_FILE=docker-compose.yaml:monitoring.yaml
 docker-compose up -d
-```
\ No newline at end of file
+```
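Before wiring the corrected endpoints into prometheus.yaml, they can be checked by hand (hostnames as used above; this assumes the Prometheus metrics endpoint is enabled as described earlier in the recipe):

```bash
curl -s http://scm:9876/prom | head
curl -s http://ozoneManager:9874/prom | head
```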
diff --git a/hadoop-hdds/docs/content/recipe/Prometheus.zh.md b/hadoop-hdds/docs/content/recipe/Prometheus.zh.md
index 069b340..bb64edc 100644
--- a/hadoop-hdds/docs/content/recipe/Prometheus.zh.md
+++ b/hadoop-hdds/docs/content/recipe/Prometheus.zh.md
@@ -44,9 +44,9 @@
 
 * 重启 OM 和 SCM,检查端点:
 
- * http://scm:9874/prom
+ * http://scm:9876/prom
 
- * http://ozoneManager:9876/prom
+ * http://ozoneManager:9874/prom
 
 * 根据这两个端点,创建 prometheus.yaml 配置文件:
 
@@ -91,4 +91,4 @@
 cd compose/ozone
 export COMPOSE_FILE=docker-compose.yaml:monitoring.yaml
 docker-compose up -d
-```
\ No newline at end of file
+```
diff --git a/hadoop-hdds/docs/content/security/SecuringOzoneHTTP.md b/hadoop-hdds/docs/content/security/SecuringOzoneHTTP.md
index b08536e..62916a9 100644
--- a/hadoop-hdds/docs/content/security/SecuringOzoneHTTP.md
+++ b/hadoop-hdds/docs/content/security/SecuringOzoneHTTP.md
@@ -128,17 +128,17 @@
 ### Enable SIMPLE authentication for SCM HTTP
 Property| Value
 -----------------------------------|-----------------------------------------
-ozone.scm.http.auth.type | simple
-ozone.scm.http.auth.simple.anonymous_allowed | false
+hdds.scm.http.auth.type | simple
+hdds.scm.http.auth.simple.anonymous_allowed | false
 
 If you don't want to specify the user.name in the query string parameter, 
-change ozone.scm.http.auth.simple.anonymous_allowed to true.
+change hdds.scm.http.auth.simple.anonymous_allowed to true.
 
 ### Enable SIMPLE authentication for DATANODE HTTP
 Property| Value
 -----------------------------------|-----------------------------------------
-ozone.datanode.http.auth.type | simple
-ozone.datanode.http.auth.simple.anonymous_allowed | false
+hdds.datanode.http.auth.type | simple
+hdds.datanode.http.auth.simple.anonymous_allowed | false
 
 If you don't want to specify the user.name in the query string parameter, 
-change ozone.datanode.http.auth.simple.anonymous_allowed to true.
+change hdds.datanode.http.auth.simple.anonymous_allowed to true.
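With SIMPLE authentication and `anonymous_allowed` set to false, requests must carry a `user.name` query parameter. A hedged example against the SCM web UI, assuming the default HTTP port 9876; the user name itself is arbitrary under simple auth:

```bash
curl -s "http://scm-host:9876/?user.name=ozone"
```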
diff --git a/hadoop-hdds/docs/content/security/SecurityWithRanger.md b/hadoop-hdds/docs/content/security/SecurityWithRanger.md
index 7daaf81..ee86a11 100644
--- a/hadoop-hdds/docs/content/security/SecurityWithRanger.md
+++ b/hadoop-hdds/docs/content/security/SecurityWithRanger.md
@@ -27,8 +27,9 @@
 
 
 Apache Ranger™ is a framework to enable, monitor and manage comprehensive data
-security across the Hadoop platform. Any version of Apache Ranger which is greater
-than 1.20 is aware of Ozone, and can manage an Ozone cluster.
+security across the Hadoop platform. Apache Ranger has supported Ozone authorization
+since version 2.0. However, due to some bugs in 2.0, Apache Ranger 
+2.1 and later versions are recommended.
 
 
 To use Apache Ranger, you must have Apache Ranger installed in your Hadoop
@@ -44,3 +45,19 @@
 --------|------------------------------------------------------------
 ozone.acl.enabled         | true
 ozone.acl.authorizer.class| org.apache.ranger.authorization.ozone.authorizer.RangerOzoneAuthorizer
+
+The Ranger permissions corresponding to the Ozone operations are as follows:
+
+| operation&permission | Volume  permission | Bucket permission | Key permission |
+| :--- | :--- | :--- | :--- |
+| Create  volume | CREATE | | |
+| List volume | LIST | | |
+| Get volume Info | READ | | |
+| Delete volume | DELETE | | |
+| Create  bucket | READ | CREATE | |
+| List bucket | LIST, READ | | |
+| Get bucket info | READ | READ | |
+| Delete bucket | READ | DELETE | |
+| List key | READ | LIST, READ | |
+| Write key | READ | READ | CREATE, WRITE |
+| Read key | READ | READ | READ |
diff --git a/hadoop-hdds/docs/content/security/SecurityWithRanger.zh.md b/hadoop-hdds/docs/content/security/SecurityWithRanger.zh.md
index 4d40a17..e7ff33e 100644
--- a/hadoop-hdds/docs/content/security/SecurityWithRanger.zh.md
+++ b/hadoop-hdds/docs/content/security/SecurityWithRanger.zh.md
@@ -26,7 +26,7 @@
 -->
 
 
-Apache Ranger™ 是一个用于管理和监控 Hadoop 平台复杂数据权限的框架。版本大于 1.20 的 Apache Ranger 都可以用于管理 Ozone 集群。
+Apache Ranger™ 是一个用于管理和监控 Hadoop 平台复杂数据权限的框架。Apache Ranger 从2.0版本开始支持Ozone鉴权。但由于在2.0中存在一些bug,因此我们更推荐使用Apache Ranger 2.1及以后版本。
 
 你需要先在你的 Hadoop 集群上安装 Apache Ranger,安装指南可以参考 [Apache Ranger 官网](https://ranger.apache.org/index.html).
 
@@ -36,3 +36,19 @@
 --------|------------------------------------------------------------
 ozone.acl.enabled         | true
 ozone.acl.authorizer.class| org.apache.ranger.authorization.ozone.authorizer.RangerOzoneAuthorizer
+
+Ozone各类操作对应Ranger权限如下:
+
+| operation&permission | Volume  permission | Bucket permission | Key permission |
+| :--- | :--- | :--- | :--- |
+| Create volume | CREATE | | |
+| List volume | LIST | | |
+| Get volume Info | READ | | |
+| Delete volume | DELETE | | |
+| Create  bucket | READ | CREATE | |
+| List bucket | LIST, READ | | |
+| Get bucket info | READ | READ | |
+| Delete bucket | READ | DELETE | |
+| List key | READ | LIST, READ | |
+| Write key | READ | READ | CREATE, WRITE |
+| Read key | READ | READ | READ |
\ No newline at end of file
diff --git a/hadoop-hdds/docs/content/tools/AuditParser.md b/hadoop-hdds/docs/content/tools/AuditParser.md
index e4da208..ee2acd9 100644
--- a/hadoop-hdds/docs/content/tools/AuditParser.md
+++ b/hadoop-hdds/docs/content/tools/AuditParser.md
@@ -21,7 +21,7 @@
 -->
 
 Audit Parser tool can be used for querying the ozone audit logs.
-This tool creates a sqllite database at the specified path. If the database
+This tool creates a sqlite database at the specified path. If the database
 already exists, it will not be recreated.
 
 The database contains only one table called `audit` defined as:
diff --git a/hadoop-hdds/docs/dev-support/bin/generate-site.sh b/hadoop-hdds/docs/dev-support/bin/generate-site.sh
index 4dfbebc..3d7baa8 100755
--- a/hadoop-hdds/docs/dev-support/bin/generate-site.sh
+++ b/hadoop-hdds/docs/dev-support/bin/generate-site.sh
@@ -24,8 +24,15 @@
    exit 0
 fi
 
+export OZONE_VERSION=$(mvn help:evaluate -Dexpression=ozone.version -q -DforceStdout)
+
+ENABLE_GIT_INFO=
+if git -C $(pwd) status >& /dev/null; then
+  ENABLE_GIT_INFO="--enableGitInfo"
+fi
+
 DESTDIR="$DOCDIR/target/classes/docs"
 mkdir -p "$DESTDIR"
 cd "$DOCDIR"
-hugo -d "$DESTDIR" "$@"
+hugo "${ENABLE_GIT_INFO}" -d "$DESTDIR" "$@"
 cd -
diff --git a/hadoop-hdds/docs/pom.xml b/hadoop-hdds/docs/pom.xml
index 404b6c2..3a6aea0 100644
--- a/hadoop-hdds/docs/pom.xml
+++ b/hadoop-hdds/docs/pom.xml
@@ -24,8 +24,8 @@
   </parent>
   <artifactId>hadoop-hdds-docs</artifactId>
   <version>1.1.0-SNAPSHOT</version>
-  <description>Apache Hadoop HDDS/Ozone Documentation</description>
-  <name>Apache Hadoop HDDS/Ozone Documentation</name>
+  <description>Apache Ozone/HDDS Documentation</description>
+  <name>Apache Ozone/HDDS Documentation</name>
   <packaging>jar</packaging>
 
   <dependencies>
diff --git a/hadoop-hdds/docs/static/ozone-logo-monochrome.svg b/hadoop-hdds/docs/static/ozone-logo-monochrome.svg
index cd046a0..89cc166 100644
--- a/hadoop-hdds/docs/static/ozone-logo-monochrome.svg
+++ b/hadoop-hdds/docs/static/ozone-logo-monochrome.svg
@@ -28,7 +28,7 @@
    sodipodi:docname="ozone_bolt.svg"
    inkscape:version="0.92.4 (5da689c313, 2019-01-14)">
   <title
-     id="title39">Apache Hadoop Ozone Logo</title>
+     id="title39">Apache Ozone Logo</title>
   <metadata
      id="metadata32">
     <rdf:RDF>
@@ -37,7 +37,7 @@
         <dc:format>image/svg+xml</dc:format>
         <dc:type
            rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
-        <dc:title>Apache Hadoop Ozone Logo</dc:title>
+        <dc:title>Apache Ozone Logo</dc:title>
         <cc:license
            rdf:resource="https://www.apache.org/licenses/LICENSE-2.0" />
       </cc:Work>
diff --git a/hadoop-hdds/docs/themes/ozonedoc/layouts/_default/baseof.html b/hadoop-hdds/docs/themes/ozonedoc/layouts/_default/baseof.html
index c46f829..0d3810d 100644
--- a/hadoop-hdds/docs/themes/ozonedoc/layouts/_default/baseof.html
+++ b/hadoop-hdds/docs/themes/ozonedoc/layouts/_default/baseof.html
@@ -20,6 +20,7 @@
 
 {{ partial "navbar.html" . }}
 
+<div class="wrapper">
 <div class="container-fluid">
     <div class="row">
         {{ partial "sidebar.html" . }}
@@ -33,6 +34,7 @@
         </div>
     </div>
 </div>
+    <div class="push"></div>
 </div>
 
 {{ partial "footer.html" . }}
diff --git a/hadoop-hdds/docs/themes/ozonedoc/layouts/_default/section.html b/hadoop-hdds/docs/themes/ozonedoc/layouts/_default/section.html
index c4408d5..2963c80 100644
--- a/hadoop-hdds/docs/themes/ozonedoc/layouts/_default/section.html
+++ b/hadoop-hdds/docs/themes/ozonedoc/layouts/_default/section.html
@@ -20,6 +20,7 @@
 
 {{ partial "navbar.html" . }}
 
+<div class="wrapper">
 <div class="container-fluid">
     <div class="row">
         {{ partial "sidebar.html" . }}
@@ -67,6 +68,8 @@
         </div>
     </div>
 </div>
+    <div class="push"></div>
+</div>
 
 {{ partial "footer.html" . }}
 
diff --git a/hadoop-hdds/docs/themes/ozonedoc/layouts/_default/single.html b/hadoop-hdds/docs/themes/ozonedoc/layouts/_default/single.html
index 208a971..d4e439a 100644
--- a/hadoop-hdds/docs/themes/ozonedoc/layouts/_default/single.html
+++ b/hadoop-hdds/docs/themes/ozonedoc/layouts/_default/single.html
@@ -20,6 +20,7 @@
 
   {{ partial "navbar.html" . }}
 
+  <div class="wrapper">
   <div class="container-fluid">
     <div class="row">
       {{ partial "sidebar.html" . }}
@@ -53,6 +54,8 @@
       </div>
     </div>
   </div>
+    <div class="push"></div>
+  </div>
 
   {{ partial "footer.html" . }}
 
diff --git a/hadoop-hdds/docs/themes/ozonedoc/layouts/index.html b/hadoop-hdds/docs/themes/ozonedoc/layouts/index.html
index 75725a2..3a6bce1 100644
--- a/hadoop-hdds/docs/themes/ozonedoc/layouts/index.html
+++ b/hadoop-hdds/docs/themes/ozonedoc/layouts/index.html
@@ -18,20 +18,22 @@
 
   <body>
 
-{{ partial "navbar.html" . }}
+    <div class="wrapper">
+        {{ partial "navbar.html" . }}
 
-    <div class="container-fluid">
-      <div class="row">
-        {{ partial "sidebar.html" . }}
-        <div class="col-sm-10 col-sm-offset-2 col-md-10 col-md-offset-2 main">
-            {{ partial "languages.html" .}}
+        <div class="container-fluid">
+          <div class="row">
+            {{ partial "sidebar.html" . }}
+            <div class="col-sm-10 col-sm-offset-2 col-md-10 col-md-offset-2 main">
+                {{ partial "languages.html" .}}
 
-            {{ .Content }}
+                {{ .Content }}
+            </div>
+          </div>
         </div>
-      </div>
+        <div class="push"></div>
     </div>
-
-{{ partial "footer.html" . }}
+    {{ partial "footer.html" . }}
 
   </body>
 </html>
diff --git a/hadoop-hdds/docs/themes/ozonedoc/layouts/partials/footer.html b/hadoop-hdds/docs/themes/ozonedoc/layouts/partials/footer.html
index 20bf76e..7683482 100644
--- a/hadoop-hdds/docs/themes/ozonedoc/layouts/partials/footer.html
+++ b/hadoop-hdds/docs/themes/ozonedoc/layouts/partials/footer.html
@@ -14,6 +14,15 @@
   See the License for the specific language governing permissions and
   limitations under the License.
 -->
+
+<footer class="footer">
+  <div class="container">
+    <span class="small text-muted">
+      Version: {{ getenv "OZONE_VERSION" }}{{ with .GitInfo }}, Last Modified: {{ .AuthorDate.Format "January 2, 2006" }} <a class="hide-child link primary-color" href="{{$.Site.Params.ghrepo}}commit/{{ .Hash }}">{{ .AbbreviatedHash }}</a>{{end }}
+    </span>
+  </div>
+</footer>
+
 <!-- Bootstrap core JavaScript
 ================================================== -->
 <!-- Placed at the end of the document so the pages load faster -->
diff --git a/hadoop-hdds/docs/themes/ozonedoc/layouts/partials/header.html b/hadoop-hdds/docs/themes/ozonedoc/layouts/partials/header.html
index a4e24c9..8f475b6 100644
--- a/hadoop-hdds/docs/themes/ozonedoc/layouts/partials/header.html
+++ b/hadoop-hdds/docs/themes/ozonedoc/layouts/partials/header.html
@@ -21,9 +21,9 @@
     <meta http-equiv="X-UA-Compatible" content="IE=edge">
     <meta name="viewport" content="width=device-width, initial-scale=1">
     <!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
-    <meta name="description" content="Hadoop Ozone Documentation">
+    <meta name="description" content="Apache Ozone Documentation">
 
-    <title>Documentation for Apache Hadoop Ozone</title>
+    <title>Documentation for Apache Ozone</title>
 
     <!-- Bootstrap core CSS -->
     <link href="{{ "css/bootstrap.min.css" | relURL}}" rel="stylesheet">
diff --git a/hadoop-hdds/docs/themes/ozonedoc/layouts/partials/navbar.html b/hadoop-hdds/docs/themes/ozonedoc/layouts/partials/navbar.html
index f942e4a..d4c9f2e 100644
--- a/hadoop-hdds/docs/themes/ozonedoc/layouts/partials/navbar.html
+++ b/hadoop-hdds/docs/themes/ozonedoc/layouts/partials/navbar.html
@@ -27,9 +27,9 @@
         <img src="{{ "ozone-logo-small.png" | relURL }}"/>
       </a>
       <a class="navbar-brand hidden-xs" href="{{ "index.html" | relLangURL }}">
-        Apache Hadoop Ozone/HDDS documentation
+        Apache Ozone/HDDS documentation
       </a>
-      <a class="navbar-brand visible-xs-inline" href="#">Hadoop Ozone</a>
+      <a class="navbar-brand visible-xs-inline" href="#">Apache Ozone</a>
     </div>
     <div id="navbar" class="navbar-collapse collapse">
       <ul class="nav navbar-nav navbar-right">
diff --git a/hadoop-hdds/docs/themes/ozonedoc/static/css/ozonedoc.css b/hadoop-hdds/docs/themes/ozonedoc/static/css/ozonedoc.css
index aa57c92..90068cc 100644
--- a/hadoop-hdds/docs/themes/ozonedoc/static/css/ozonedoc.css
+++ b/hadoop-hdds/docs/themes/ozonedoc/static/css/ozonedoc.css
@@ -20,6 +20,11 @@
  * Base structure
  */
 
+html, body {
+  height: 100%;
+  margin: 0;
+}
+
 /* Move down content because we have a fixed navbar that is 50px tall */
 body {
   padding-top: 50px;
@@ -181,4 +186,27 @@
 
 table.table {
   margin: 20px 20px 40px;
-}
\ No newline at end of file
+}
+
+.footer,
+.push {
+  height: 50px;
+}
+
+.footer {
+  background-color: #f5f5f5;
+}
+
+.wrapper {
+  min-height: 100%;
+
+  /* Equal to height of footer */
+  /* But also accounting for potential margin-bottom of last child */
+  margin-bottom: -50px;
+}
+
+.footer .container {
+  padding-top: 10px;
+  padding-bottom: 10px;
+  text-align: center;
+}
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/protocol/SCMSecurityProtocol.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/protocol/SCMSecurityProtocol.java
index 2df063f..52dc033 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/protocol/SCMSecurityProtocol.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/protocol/SCMSecurityProtocol.java
@@ -17,7 +17,10 @@
 package org.apache.hadoop.hdds.protocol;
 
 import java.io.IOException;
+import java.util.List;
+
 import org.apache.hadoop.hdds.annotation.InterfaceAudience;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.DatanodeDetailsProto;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.OzoneManagerDetailsProto;
 import org.apache.hadoop.hdds.scm.ScmConfig;
@@ -77,4 +80,16 @@
    */
   String getCACertificate() throws IOException;
 
+  /**
+   * Get the list of certificates that meet the query criteria.
+   *
+   * @param type            - node type: OM/SCM/DN.
+   * @param startSerialId   - start certificate serial id.
+   * @param count           - max number of certificates returned in a batch.
+   * @param isRevoked       - whether to list revoked certs only.
+   * @return list of PEM encoded certificate strings.
+   */
+  List<String> listCertificate(HddsProtos.NodeType type, long startSerialId,
+      int count, boolean isRevoked) throws IOException;
+
 }
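
For illustration, a minimal sketch of calling the new listCertificate API; the securityClient proxy, the DATANODE node type, and the batch size of 20 are assumptions made for the example, not part of this change.

    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
    import org.apache.hadoop.hdds.protocol.proto.HddsProtos;

    // Illustrative only: prints up to 20 valid (non-revoked) datanode
    // certificates, starting from serial id 0.
    public final class ListCertsSketch {
      public static void printDatanodeCerts(SCMSecurityProtocol securityClient)
          throws IOException {
        List<String> pemCerts = securityClient.listCertificate(
            HddsProtos.NodeType.DATANODE, 0L, 20, false);
        for (String pem : pemCerts) {
          System.out.println(pem);
        }
      }
    }
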
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolClientSideTranslatorPB.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolClientSideTranslatorPB.java
index efe79a7..aeef50e 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolClientSideTranslatorPB.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolClientSideTranslatorPB.java
@@ -18,9 +18,11 @@
 
 import java.io.Closeable;
 import java.io.IOException;
+import java.util.List;
 import java.util.function.Consumer;
 
 import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.DatanodeDetailsProto;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.OzoneManagerDetailsProto;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos;
@@ -28,6 +30,7 @@
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMGetCertResponseProto;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMGetCertificateRequestProto;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMGetDataNodeCertRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMListCertificateRequestProto;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMSecurityRequest;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMSecurityRequest.Builder;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMSecurityResponse;
@@ -202,6 +205,30 @@
   }
 
   /**
+   * Lists certificates that meet the query criteria.
+   * @param role            - node type: OM/SCM/DN.
+   * @param startSerialId   - start cert serial id.
+   * @param count           - max number of certificates returned in a batch.
+   * @param isRevoked       - whether to return revoked certs only.
+   * @return list of PEM encoded certificate strings.
+   * @throws IOException in case of a communication failure with SCM.
+   */
+  @Override
+  public List<String> listCertificate(HddsProtos.NodeType role,
+      long startSerialId, int count, boolean isRevoked) throws IOException {
+    SCMListCertificateRequestProto protoIns = SCMListCertificateRequestProto
+        .newBuilder()
+        .setRole(role)
+        .setStartCertId(startSerialId)
+        .setCount(count)
+        .setIsRevoked(isRevoked)
+        .build();
+    return submitRequest(Type.ListCertificate,
+        builder -> builder.setListCertificateRequest(protoIns))
+        .getListCertificateResponseProto().getCertificatesList();
+  }
+
+  /**
    * Return the proxy object underlying this protocol translator.
    *
    * @return the proxy object underlying this protocol translator.
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
index 318b424..f21bfdb 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/scm/protocolPB/StorageContainerLocationProtocolClientSideTranslatorPB.java
@@ -67,6 +67,9 @@
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.ScmContainerLocationResponse;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.StartReplicationManagerRequestProto;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.StopReplicationManagerRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.StartMaintenanceNodesRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.DecommissionNodesRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.RecommissionNodesRequestProto;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.Type;
 import org.apache.hadoop.hdds.scm.ScmInfo;
 import org.apache.hadoop.hdds.scm.container.ContainerInfo;
@@ -292,23 +295,89 @@
   }
 
   /**
-   * Queries a list of Node Statuses.
+   * Queries a list of Nodes based on their operational state or health state.
+   * Passing a null for either value acts as a wildcard for that state.
+   *
+   * @param opState The operation state of the node
+   * @param nodeState The health of the node
+   * @return List of Datanodes.
    */
   @Override
-  public List<HddsProtos.Node> queryNode(HddsProtos.NodeState
-      nodeStatuses, HddsProtos.QueryScope queryScope, String poolName)
+  public List<HddsProtos.Node> queryNode(
+      HddsProtos.NodeOperationalState opState, HddsProtos.NodeState
+      nodeState, HddsProtos.QueryScope queryScope, String poolName)
       throws IOException {
     // TODO : We support only cluster wide query right now. So ignoring checking
     // queryScope and poolName
-    Preconditions.checkNotNull(nodeStatuses);
-    NodeQueryRequestProto request = NodeQueryRequestProto.newBuilder()
-        .setState(nodeStatuses)
+    NodeQueryRequestProto.Builder builder = NodeQueryRequestProto.newBuilder()
         .setTraceID(TracingUtil.exportCurrentSpan())
-        .setScope(queryScope).setPoolName(poolName).build();
+        .setScope(queryScope).setPoolName(poolName);
+    if (opState != null) {
+      builder.setOpState(opState);
+    }
+    if (nodeState != null) {
+      builder.setState(nodeState);
+    }
+    NodeQueryRequestProto request = builder.build();
     NodeQueryResponseProto response = submitRequest(Type.QueryNode,
-        builder -> builder.setNodeQueryRequest(request)).getNodeQueryResponse();
+        builder1 -> builder1.setNodeQueryRequest(request))
+        .getNodeQueryResponse();
     return response.getDatanodesList();
+  }
 
+  /**
+   * Attempts to decommission the list of nodes.
+   * @param nodes The list of hostnames or hostname:ports to decommission
+   * @throws IOException
+   */
+  @Override
+  public void decommissionNodes(List<String> nodes) throws IOException {
+    Preconditions.checkNotNull(nodes);
+    DecommissionNodesRequestProto request =
+        DecommissionNodesRequestProto.newBuilder()
+        .addAllHosts(nodes)
+        .build();
+    submitRequest(Type.DecommissionNodes,
+        builder -> builder.setDecommissionNodesRequest(request));
+  }
+
+  /**
+   * Attempts to recommission the list of nodes.
+   * @param nodes The list of hostnames or hostname:ports to recommission
+   * @throws IOException
+   */
+  @Override
+  public void recommissionNodes(List<String> nodes) throws IOException {
+    Preconditions.checkNotNull(nodes);
+    RecommissionNodesRequestProto request =
+        RecommissionNodesRequestProto.newBuilder()
+            .addAllHosts(nodes)
+            .build();
+    submitRequest(Type.RecommissionNodes,
+        builder -> builder.setRecommissionNodesRequest(request));
+  }
+
+  /**
+   * Attempts to put the list of nodes into maintenance mode.
+   *
+   * @param nodes The list of hostnames or hostname:ports to put into
+   *              maintenance
+   * @param endInHours The number of hours from now after which the nodes will
+   *                   be taken out of maintenance automatically. Passing zero
+   *                   allows the nodes to stay in maintenance indefinitely
+   * @throws IOException
+   */
+  @Override
+  public void startMaintenanceNodes(List<String> nodes, int endInHours)
+      throws IOException {
+    Preconditions.checkNotNull(nodes);
+    StartMaintenanceNodesRequestProto request =
+        StartMaintenanceNodesRequestProto.newBuilder()
+            .addAllHosts(nodes)
+            .setEndInHours(endInHours)
+            .build();
+    submitRequest(Type.StartMaintenanceNodes,
+        builder -> builder.setStartMaintenanceNodesRequest(request));
   }
 
   /**
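
As a usage illustration only, the following sketch drives the new admin calls through the client-side translator shown above; the already-constructed scmClient instance, the host names, and the DECOMMISSIONING operational-state constant are assumptions for the example.

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
    import org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB;

    // Illustrative only: decommission two datanodes, then query nodes that are
    // currently decommissioning, with the health state left as a wildcard.
    public final class NodeAdminSketch {
      public static void decommissionAndCheck(
          StorageContainerLocationProtocolClientSideTranslatorPB scmClient)
          throws IOException {
        List<String> hosts =
            Arrays.asList("dn1.example.com", "dn2.example.com:9856");
        scmClient.decommissionNodes(hosts);

        // Passing null for the health state acts as a wildcard for it.
        List<HddsProtos.Node> nodes = scmClient.queryNode(
            HddsProtos.NodeOperationalState.DECOMMISSIONING, null,
            HddsProtos.QueryScope.CLUSTER, "");
        System.out.println("Nodes decommissioning: " + nodes.size());
      }
    }
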
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/token/BlockTokenVerifier.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/token/BlockTokenVerifier.java
index ea222df..0c2249a 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/token/BlockTokenVerifier.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/token/BlockTokenVerifier.java
@@ -21,6 +21,7 @@
 import com.google.common.base.Strings;
 import org.apache.hadoop.hdds.HddsUtils;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.security.exception.SCMSecurityException;
 import org.apache.hadoop.hdds.security.x509.SecurityConfig;
 import org.apache.hadoop.hdds.security.x509.certificate.client.CertificateClient;
@@ -35,6 +36,13 @@
 import java.io.IOException;
 import java.security.cert.X509Certificate;
 
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Type.GetBlock;
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Type.GetSmallFile;
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Type.PutBlock;
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Type.PutSmallFile;
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Type.ReadChunk;
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Type.WriteChunk;
+
 
 /**
  * Verify token and return a UGI with token if authenticated.
@@ -73,14 +81,13 @@
     OzoneBlockTokenIdentifier tokenId = new OzoneBlockTokenIdentifier();
     try {
       token.decodeFromUrlString(tokenStr);
-      if (LOGGER.isDebugEnabled()) {
-        LOGGER.debug("Verifying token:{} for user:{} ", token, user);
-      }
       ByteArrayInputStream buf = new ByteArrayInputStream(
           token.getIdentifier());
       DataInputStream in = new DataInputStream(buf);
       tokenId.readFields(in);
-
+      if (LOGGER.isDebugEnabled()) {
+        LOGGER.debug("Verifying token:{} for user:{} ", tokenId, user);
+      }
     } catch (IOException ex) {
       throw new BlockTokenException("Failed to decode token : " + tokenStr);
     }
@@ -118,7 +125,21 @@
           " by user: " + tokenUser);
     }
 
-    // TODO: check cmd type and the permissions(AccessMode) in the token
+    if (cmd == ReadChunk || cmd == GetBlock || cmd == GetSmallFile) {
+      if (!tokenId.getAccessModes().contains(
+          HddsProtos.BlockTokenSecretProto.AccessModeProto.READ)) {
+        throw new BlockTokenException("Block token with " + id
+            + " doesn't have READ permission");
+      }
+    } else if (cmd == WriteChunk || cmd == PutBlock || cmd == PutSmallFile) {
+      if (!tokenId.getAccessModes().contains(
+          HddsProtos.BlockTokenSecretProto.AccessModeProto.WRITE)) {
+        throw new BlockTokenException("Block token with " + id
+            + " doesn't have WRITE permission");
+      }
+    } else {
+      throw new BlockTokenException("Block token does not support " + cmd);
+    }
   }
 
   public static boolean isTestStub() {
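
The intent of the new check above can be summarised with a small standalone sketch, using simplified enums instead of the ContainerProtos and HddsProtos types: read-type commands need READ access in the block token, write-type commands need WRITE, and anything else is rejected.

    import java.util.EnumSet;
    import java.util.Set;

    // Standalone sketch of the access-mode rule; not the patched class itself.
    public final class RequiredAccessModeSketch {

      public enum Access { READ, WRITE }

      public enum Cmd {
        ReadChunk, GetBlock, GetSmallFile,
        WriteChunk, PutBlock, PutSmallFile,
        Other
      }

      private static final Set<Cmd> READ_CMDS =
          EnumSet.of(Cmd.ReadChunk, Cmd.GetBlock, Cmd.GetSmallFile);
      private static final Set<Cmd> WRITE_CMDS =
          EnumSet.of(Cmd.WriteChunk, Cmd.PutBlock, Cmd.PutSmallFile);

      // Returns the access mode the token must carry for the given command.
      public static Access requiredFor(Cmd cmd) {
        if (READ_CMDS.contains(cmd)) {
          return Access.READ;
        } else if (WRITE_CMDS.contains(cmd)) {
          return Access.WRITE;
        }
        throw new IllegalArgumentException("Block tokens do not cover " + cmd);
      }
    }
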
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateServer.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateServer.java
index b1d7d6b..76512c5 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateServer.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateServer.java
@@ -19,6 +19,7 @@
 
 package org.apache.hadoop.hdds.security.x509.certificate.authority;
 
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.security.exception.SCMSecurityException;
 import org.apache.hadoop.hdds.security.x509.SecurityConfig;
 import org.apache.hadoop.hdds.security.x509.certificate.authority.CertificateApprover.ApprovalType;
@@ -28,6 +29,7 @@
 import java.io.IOException;
 import java.security.cert.CertificateException;
 import java.security.cert.X509Certificate;
+import java.util.List;
 import java.util.concurrent.Future;
 
 /**
@@ -112,6 +114,16 @@
    * framework.
    */
 
+  /**
+   * Lists certificates that meet the query criteria.
+   * @param type            - node type: OM/SCM/DN
+   * @param startSerialId   - start certificate serial id
+   * @param count           - max number of certificates returned in a batch
+   * @param isRevoked       - whether to return revoked certs only
+   * @return list of X509 certificates
+   * @throws IOException in case of a certificate store failure
+   */
+  List<X509Certificate> listCertificate(HddsProtos.NodeType type,
+      long startSerialId, int count, boolean isRevoked) throws IOException;
 
   /**
    * Make it explicit what type of CertificateServer we are creating here.
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateStore.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateStore.java
index 961d048..3ddb640 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateStore.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateStore.java
@@ -19,9 +19,12 @@
 
 package org.apache.hadoop.hdds.security.x509.certificate.authority;
 
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+
 import java.io.IOException;
 import java.math.BigInteger;
 import java.security.cert.X509Certificate;
+import java.util.List;
 
 /**
  * This interface allows the DefaultCA to be portable and use different DB
@@ -70,6 +73,19 @@
       throws IOException;
 
   /**
+   * Lists certificates of the given type for the given role.
+   * @param role - role of the certificate owner (OM/DN).
+   * @param startSerialID - start cert serial id.
+   * @param count - max number of certs returned.
+   * @param certType cert type (valid/revoked).
+   * @return list of X509 certificates.
+   * @throws IOException
+   */
+  List<X509Certificate> listCertificate(HddsProtos.NodeType role,
+      BigInteger startSerialID, int count, CertType certType)
+      throws IOException;
+
+  /**
    * Different kind of Certificate stores.
    */
   enum CertType {
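
As a rough, hedged sketch of what an implementation of the new listCertificate contract could look like, the snippet below uses plain JDK types and an in-memory map keyed by serial id; the real stores are backed by RocksDB, and the role filter is omitted here for brevity.

    import java.math.BigInteger;
    import java.security.cert.X509Certificate;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    // Illustrative in-memory store; not the actual CertificateStore implementation.
    public final class InMemoryCertStoreSketch {

      public enum CertType { VALID_CERTS, REVOKED_CERTS }

      private final NavigableMap<BigInteger, X509Certificate> validCerts =
          new TreeMap<>();
      private final NavigableMap<BigInteger, X509Certificate> revokedCerts =
          new TreeMap<>();

      public List<X509Certificate> listCertificate(BigInteger startSerialId,
          int count, CertType certType) {
        NavigableMap<BigInteger, X509Certificate> source =
            certType == CertType.REVOKED_CERTS ? revokedCerts : validCerts;
        List<X509Certificate> result = new ArrayList<>(count);
        // tailMap(start, true) yields all certs with serial id >= startSerialId.
        for (X509Certificate cert : source.tailMap(startSerialId, true).values()) {
          if (result.size() >= count) {
            break;
          }
          result.add(cert);
        }
        return result;
      }
    }
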
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/DefaultCAServer.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/DefaultCAServer.java
index 2378260..0523209 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/DefaultCAServer.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/DefaultCAServer.java
@@ -22,6 +22,7 @@
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 import org.apache.commons.validator.routines.DomainValidator;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.security.exception.SCMSecurityException;
 import org.apache.hadoop.hdds.security.x509.SecurityConfig;
 import org.apache.hadoop.hdds.security.x509.certificate.authority.PKIProfiles.DefaultProfile;
@@ -51,6 +52,7 @@
 import java.time.LocalDate;
 import java.time.LocalDateTime;
 import java.time.LocalTime;
+import java.util.List;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.Future;
 import java.util.function.Consumer;
@@ -289,6 +291,23 @@
   }
 
   /**
+   * Lists certificates issued by this CA that meet the query criteria.
+   * @param role            - node type: OM/SCM/DN.
+   * @param startSerialId   - start cert serial id.
+   * @param count           - max number of certificates returned in a batch.
+   * @param isRevoked       - whether to return revoked certs only.
+   * @return list of X509 certificates.
+   * @throws IOException in case of a certificate store failure.
+   */
+  @Override
+  public List<X509Certificate> listCertificate(HddsProtos.NodeType role,
+      long startSerialId, int count, boolean isRevoked) throws IOException {
+    return store.listCertificate(role, BigInteger.valueOf(startSerialId), count,
+        isRevoked? CertificateStore.CertType.REVOKED_CERTS :
+            CertificateStore.CertType.VALID_CERTS);
+  }
+
+  /**
    * Generates a Self Signed CertificateServer. These are the steps in
    * generating a Self-Signed CertificateServer.
    * <p>
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HtmlQuoting.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HtmlQuoting.java
index f4262f9..44a1d00 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HtmlQuoting.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HtmlQuoting.java
@@ -127,7 +127,7 @@
       ByteArrayOutputStream buffer = new ByteArrayOutputStream();
       try {
         quoteHtmlChars(buffer, bytes, 0, bytes.length);
-        return buffer.toString("UTF-8");
+        return buffer.toString(StandardCharsets.UTF_8.name());
       } catch (IOException ioe) {
         // Won't happen, since it is a bytearrayoutputstream
         return null;
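
The change above (and the HttpServer2 change below) follows a common pattern: replace the hard-coded "UTF-8" literal with the StandardCharsets constant so the charset name cannot be mistyped. A tiny self-contained illustration:

    import java.io.ByteArrayOutputStream;
    import java.io.UnsupportedEncodingException;
    import java.nio.charset.StandardCharsets;

    // Round-trips a string through a ByteArrayOutputStream using the
    // StandardCharsets constant instead of a hard-coded "UTF-8" literal.
    public final class CharsetSketch {
      public static String roundTrip(String input) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] bytes = input.getBytes(StandardCharsets.UTF_8);
        buffer.write(bytes, 0, bytes.length);
        try {
          // toString(String) still declares UnsupportedEncodingException, but
          // the name of a StandardCharsets constant is always supported.
          return buffer.toString(StandardCharsets.UTF_8.name());
        } catch (UnsupportedEncodingException e) {
          throw new AssertionError("UTF-8 is always supported", e);
        }
      }

      public static void main(String[] args) {
        System.out.println(roundTrip("Apache Ozone"));
      }
    }
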
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HttpServer2.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HttpServer2.java
index 9282c84..9aad94a 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HttpServer2.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/HttpServer2.java
@@ -38,6 +38,7 @@
 import java.net.MalformedURLException;
 import java.net.URI;
 import java.net.URL;
+import java.nio.charset.StandardCharsets;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.Enumeration;
@@ -1522,7 +1523,7 @@
       }
       response.setContentType("text/plain; charset=UTF-8");
       try (PrintStream out = new PrintStream(
-          response.getOutputStream(), false, "UTF-8")) {
+          response.getOutputStream(), false, StandardCharsets.UTF_8.name())) {
         ReflectionUtils.printThreadInfo(out, "");
       }
       ReflectionUtils.logThreadInfo(LOG, "jsp requested", 1);
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/RatisNameRewriteSampleBuilder.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/RatisNameRewriteSampleBuilder.java
index cbee652..e3fb737 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/RatisNameRewriteSampleBuilder.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/http/RatisNameRewriteSampleBuilder.java
@@ -26,7 +26,7 @@
 import io.prometheus.client.Collector.MetricFamilySamples.Sample;
 import io.prometheus.client.dropwizard.samplebuilder.DefaultSampleBuilder;
 import org.apache.logging.log4j.util.Strings;
-import static org.apache.ratis.server.metrics.RaftLogMetrics.RATIS_APPLICATION_NAME_METRICS;
+import static org.apache.ratis.metrics.RatisMetrics.RATIS_APPLICATION_NAME_METRICS;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/DBStore.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/DBStore.java
index 71766bd..f0096ed 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/DBStore.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/DBStore.java
@@ -25,7 +25,7 @@
 import java.util.Map;
 
 import org.apache.hadoop.hdds.annotation.InterfaceStability;
-import org.apache.hadoop.hdds.utils.db.cache.TableCacheImpl;
+import org.apache.hadoop.hdds.utils.db.cache.TableCache;
 
 /**
  * The DBStore interface provides the ability to create Tables, which store
@@ -49,8 +49,7 @@
 
   /**
    * Gets an existing TableStore with implicit key/value conversion and
-   * with default cleanup policy for cache. Default cache clean up policy is
-   * manual.
+   * with the default cache type. The default cache type is partial cache.
    *
    * @param name - Name of the TableStore to get
    * @param keyType
@@ -63,12 +62,17 @@
 
   /**
    * Gets an existing TableStore with implicit key/value conversion and
-   * with specified cleanup policy for cache.
+   * with specified cache type.
+   * @param name - Name of the TableStore to get
+   * @param keyType
+   * @param valueType
+   * @param cacheType
+   * @return - TableStore.
    * @throws IOException
    */
   <KEY, VALUE> Table<KEY, VALUE> getTable(String name,
       Class<KEY> keyType, Class<VALUE> valueType,
-      TableCacheImpl.CacheCleanupPolicy cleanupPolicy) throws IOException;
+      TableCache.CacheType cacheType) throws IOException;
 
   /**
    * Lists the Known list of Tables in a DB.
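
A hedged usage sketch of the new getTable signature follows; the store instance, table names, and String key/value types are placeholders rather than anything defined by this change.

    import java.io.IOException;

    import org.apache.hadoop.hdds.utils.db.DBStore;
    import org.apache.hadoop.hdds.utils.db.Table;
    import org.apache.hadoop.hdds.utils.db.cache.TableCache;

    // Illustrative only: open one partially cached and one fully cached table.
    public final class TableCacheTypeSketch {
      public static void openTables(DBStore store) throws IOException {
        // Default overload: partial cache, entries are evicted after flush.
        Table<String, String> partiallyCached =
            store.getTable("keyTable", String.class, String.class);

        // Explicit full cache: cache and DB state stay identical, so a cache
        // miss can be answered as NOT_EXIST without a DB read.
        Table<String, String> fullyCached = store.getTable(
            "bucketTable", String.class, String.class,
            TableCache.CacheType.FULL_CACHE);
      }
    }
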
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/RDBStore.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/RDBStore.java
index adbd2eb..252363c 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/RDBStore.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/RDBStore.java
@@ -33,10 +33,10 @@
 import org.apache.hadoop.hdds.HddsUtils;
 import org.apache.hadoop.hdds.StringUtils;
 import org.apache.hadoop.hdds.utils.RocksDBStoreMBean;
+import org.apache.hadoop.hdds.utils.db.cache.TableCache;
 import org.apache.hadoop.metrics2.util.MBeans;
 
 import com.google.common.base.Preconditions;
-import org.apache.hadoop.hdds.utils.db.cache.TableCacheImpl;
 import org.apache.ratis.thirdparty.com.google.common.annotations.VisibleForTesting;
 import org.rocksdb.ColumnFamilyDescriptor;
 import org.rocksdb.ColumnFamilyHandle;
@@ -310,9 +310,9 @@
   @Override
   public <K, V> Table<K, V> getTable(String name,
       Class<K> keyType, Class<V> valueType,
-      TableCacheImpl.CacheCleanupPolicy cleanupPolicy) throws IOException {
+      TableCache.CacheType cacheType) throws IOException {
     return new TypedTable<>(getTable(name), codecRegistry, keyType,
-        valueType, cleanupPolicy);
+        valueType, cacheType);
   }
 
   @Override
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/TypedTable.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/TypedTable.java
index 1c88290..5e44384 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/TypedTable.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/TypedTable.java
@@ -30,9 +30,10 @@
 import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
 import org.apache.hadoop.hdds.utils.db.cache.CacheResult;
 import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
-import org.apache.hadoop.hdds.utils.db.cache.TableCacheImpl;
+import org.apache.hadoop.hdds.utils.db.cache.FullTableCache;
+import org.apache.hadoop.hdds.utils.db.cache.PartialTableCache;
+import org.apache.hadoop.hdds.utils.db.cache.TableCache.CacheType;
 import org.apache.hadoop.hdds.utils.db.cache.TableCache;
-import org.apache.hadoop.hdds.utils.db.cache.TableCacheImpl.CacheCleanupPolicy;
 
 import static org.apache.hadoop.hdds.utils.db.cache.CacheResult.CacheStatus.EXISTS;
 import static org.apache.hadoop.hdds.utils.db.cache.CacheResult.CacheStatus.NOT_EXIST;
@@ -61,8 +62,7 @@
 
   /**
    * Create an TypedTable from the raw table.
-   * Default cleanup policy used for the table is
-   * {@link CacheCleanupPolicy#MANUAL}.
+   * Default cache type for the table is {@link CacheType#PARTIAL_CACHE}.
    * @param rawTable
    * @param codecRegistry
    * @param keyType
@@ -73,30 +73,30 @@
       CodecRegistry codecRegistry, Class<KEY> keyType,
       Class<VALUE> valueType) throws IOException {
     this(rawTable, codecRegistry, keyType, valueType,
-        CacheCleanupPolicy.MANUAL);
+        CacheType.PARTIAL_CACHE);
   }
 
   /**
-   * Create an TypedTable from the raw table with specified cleanup policy
-   * for table cache.
+   * Create a TypedTable from the raw table with the specified cache type.
    * @param rawTable
    * @param codecRegistry
    * @param keyType
    * @param valueType
-   * @param cleanupPolicy
+   * @param cacheType
+   * @throws IOException
    */
   public TypedTable(
       Table<byte[], byte[]> rawTable,
       CodecRegistry codecRegistry, Class<KEY> keyType,
       Class<VALUE> valueType,
-      TableCacheImpl.CacheCleanupPolicy cleanupPolicy) throws IOException {
+      CacheType cacheType) throws IOException {
     this.rawTable = rawTable;
     this.codecRegistry = codecRegistry;
     this.keyType = keyType;
     this.valueType = valueType;
-    cache = new TableCacheImpl<>(cleanupPolicy);
 
-    if (cleanupPolicy == CacheCleanupPolicy.NEVER) {
+    if (cacheType == CacheType.FULL_CACHE) {
+      cache = new FullTableCache<>();
       //fill cache
       try(TableIterator<KEY, ? extends KeyValue<KEY, VALUE>> tableIterator =
               iterator()) {
@@ -111,6 +111,8 @@
               new CacheValue<>(Optional.of(kv.getValue()), EPOCH_DEFAULT));
         }
       }
+    } else {
+      cache = new PartialTableCache<>();
     }
   }
 
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/FullTableCache.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/FullTableCache.java
new file mode 100644
index 0000000..2754b59
--- /dev/null
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/FullTableCache.java
@@ -0,0 +1,190 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.hdds.utils.db.cache;
+
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ConcurrentSkipListSet;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadFactory;
+import java.util.concurrent.locks.ReadWriteLock;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.hadoop.hdds.annotation.InterfaceAudience.Private;
+import org.apache.hadoop.hdds.annotation.InterfaceStability.Evolving;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Cache implementation for the table. Full table cache, where the DB state
+ * and the cache state are the same for these tables.
+ */
+@Private
+@Evolving
+public class FullTableCache<CACHEKEY extends CacheKey,
+    CACHEVALUE extends CacheValue> implements TableCache<CACHEKEY, CACHEVALUE> {
+
+  public static final Logger LOG =
+      LoggerFactory.getLogger(FullTableCache.class);
+
+  private final Map<CACHEKEY, CACHEVALUE> cache;
+  private final NavigableSet<EpochEntry<CACHEKEY>> epochEntries;
+  private ExecutorService executorService;
+
+  private final ReadWriteLock lock;
+
+
+  public FullTableCache() {
+    // For the full table cache, entries need to be stored in sorted order so
+    // that list operations are easy, at the cost of O(log N) lookups.
+
+    // A lock is required to protect the cache because cleanup is not done
+    // under any Ozone-level locks (bucket/volume), so cleanup could race with
+    // a request-processing thread updating entries that have not yet been
+    // flushed to disk.
+    cache = new ConcurrentSkipListMap<>();
+
+    lock = new ReentrantReadWriteLock();
+
+    epochEntries = new ConcurrentSkipListSet<>();
+
+    // Created a singleThreadExecutor, so one cleanup will be running at a
+    // time.
+    ThreadFactory build = new ThreadFactoryBuilder().setDaemon(true)
+        .setNameFormat("FullTableCache Cleanup Thread - %d").build();
+    executorService = Executors.newSingleThreadExecutor(build);
+  }
+
+  @Override
+  public CACHEVALUE get(CACHEKEY cachekey) {
+    try {
+      lock.readLock().lock();
+      return cache.get(cachekey);
+    } finally {
+      lock.readLock().unlock();
+    }
+  }
+
+  @Override
+  public void loadInitial(CACHEKEY cacheKey, CACHEVALUE cacheValue) {
+    // No need to add the entry to epochEntries; that is only required during
+    // a normal put operation.
+    // No need to acquire the lock either: this is performed only during
+    // startup, when no other operations are happening.
+    cache.put(cacheKey, cacheValue);
+  }
+
+  @Override
+  public void put(CACHEKEY cacheKey, CACHEVALUE value) {
+    try {
+      lock.writeLock().lock();
+      cache.put(cacheKey, value);
+      epochEntries.add(new EpochEntry<>(value.getEpoch(), cacheKey));
+    } finally {
+      lock.writeLock().unlock();
+    }
+  }
+
+  public void cleanup(List<Long> epochs) {
+    executorService.execute(() -> evictCache(epochs));
+  }
+
+  @Override
+  public int size() {
+    return cache.size();
+  }
+
+  @Override
+  public Iterator<Map.Entry<CACHEKEY, CACHEVALUE>> iterator() {
+    return cache.entrySet().iterator();
+  }
+
+  @VisibleForTesting
+  public void evictCache(List<Long> epochs) {
+    EpochEntry<CACHEKEY> currentEntry;
+    CACHEKEY cachekey;
+    long lastEpoch = epochs.get(epochs.size() - 1);
+    for (Iterator<EpochEntry<CACHEKEY>> iterator = epochEntries.iterator();
+         iterator.hasNext();) {
+      currentEntry = iterator.next();
+      cachekey = currentEntry.getCachekey();
+      long currentEpoch = currentEntry.getEpoch();
+
+      // If currentEntry epoch is greater than last epoch provided, we have
+      // deleted all entries less than specified epoch. So, we can break.
+      if (currentEpoch > lastEpoch) {
+        break;
+      }
+
+      // Acquire lock to avoid race between cleanup and add to cache entry by
+      // client requests.
+      try {
+        lock.writeLock().lock();
+        if (epochs.contains(currentEpoch)) {
+          // Remove epoch entry, as the entry is there in epoch list.
+          iterator.remove();
+          // Remove only entries which are marked for delete from the cache.
+          cache.computeIfPresent(cachekey, ((k, v) -> {
+            if (v.getCacheValue() == null && v.getEpoch() == currentEpoch) {
+              LOG.debug("CacheKey {} with epoch {} is removed from cache",
+                  k.getCacheKey(), currentEpoch);
+              return null;
+            }
+            return v;
+          }));
+        }
+      } finally {
+        lock.writeLock().unlock();
+      }
+
+    }
+  }
+
+  public CacheResult<CACHEVALUE> lookup(CACHEKEY cachekey) {
+
+    CACHEVALUE cachevalue = cache.get(cachekey);
+    if (cachevalue == null) {
+      return new CacheResult<>(CacheResult.CacheStatus.NOT_EXIST, null);
+    } else {
+      if (cachevalue.getCacheValue() != null) {
+        return new CacheResult<>(CacheResult.CacheStatus.EXISTS, cachevalue);
+      } else {
+        // When entity is marked for delete, cacheValue will be set to null.
+        // In that case we can return NOT_EXIST irrespective of cache cleanup
+        // policy.
+        return new CacheResult<>(CacheResult.CacheStatus.NOT_EXIST, null);
+      }
+    }
+  }
+
+  @VisibleForTesting
+  public Set<EpochEntry<CACHEKEY>> getEpochEntrySet() {
+    return epochEntries;
+  }
+
+}
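
A self-contained toy sketch of the full-cache eviction rule above, using plain JDK types instead of CacheKey/CacheValue/EpochEntry (and simplified to one key per epoch): live entries are kept forever because the cache mirrors the DB, while delete markers are dropped once their epoch has been flushed.

    import java.util.List;
    import java.util.Map;
    import java.util.NavigableMap;
    import java.util.TreeMap;
    import java.util.concurrent.ConcurrentSkipListMap;

    // Toy model of the full-cache rule; not the FullTableCache class itself.
    public final class FullCacheSketch {

      private static final class Entry {
        final String value;   // null means "marked for delete"
        final long epoch;
        Entry(String value, long epoch) {
          this.value = value;
          this.epoch = epoch;
        }
      }

      private final Map<String, Entry> cache = new ConcurrentSkipListMap<>();
      private final NavigableMap<Long, String> epochToKey = new TreeMap<>();

      public synchronized void put(String key, String value, long epoch) {
        cache.put(key, new Entry(value, epoch));
        epochToKey.put(epoch, key);
      }

      public synchronized void cleanup(List<Long> flushedEpochs) {
        for (long epoch : flushedEpochs) {
          String key = epochToKey.remove(epoch);
          if (key == null) {
            continue;
          }
          // Drop only tombstones from the flushed epoch; live values stay so
          // that a cache miss can be answered as NOT_EXIST without a DB read.
          cache.computeIfPresent(key,
              (k, v) -> v.value == null && v.epoch == epoch ? null : v);
        }
      }
    }
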
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/PartialTableCache.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/PartialTableCache.java
new file mode 100644
index 0000000..0bf03c5
--- /dev/null
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/PartialTableCache.java
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.hdds.utils.db.cache;
+
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentSkipListSet;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ThreadFactory;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.hadoop.hdds.annotation.InterfaceAudience.Private;
+import org.apache.hadoop.hdds.annotation.InterfaceStability.Evolving;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Cache implementation for the table. Partial table cache, where the DB state
+ * and the cache state are not the same. A partial table cache holds entries
+ * only until they are flushed to the DB.
+ */
+@Private
+@Evolving
+public class PartialTableCache<CACHEKEY extends CacheKey,
+    CACHEVALUE extends CacheValue> implements TableCache<CACHEKEY, CACHEVALUE> {
+
+  public static final Logger LOG =
+      LoggerFactory.getLogger(PartialTableCache.class);
+
+  private final Map<CACHEKEY, CACHEVALUE> cache;
+  private final NavigableSet<EpochEntry<CACHEKEY>> epochEntries;
+  private ExecutorService executorService;
+
+
+  public PartialTableCache() {
+    // We use a ConcurrentHashMap for O(1) lookups in the get API.
+    // List operations on a partial cache merge the DB and cache state anyway,
+    // so entries in the cache do not need to be kept in sorted order.
+
+    // ConcurrentHashMap#computeIfPresent, which cleanup uses, is atomic, and
+    // Ozone-level locks (bucket/volume) protect concurrent updates to the
+    // same key, so no cache-level lock is needed during update or cleanup:
+
+    // 1. During update, it is the caller's responsibility to hold the
+    // volume/bucket locks.
+    // 2. During cleanup, which removes an entry while a request may be
+    // updating the cache, the ConcurrentHashMap guarantees provide the
+    // necessary protection.
+    cache = new ConcurrentHashMap<>();
+
+    epochEntries = new ConcurrentSkipListSet<>();
+    // Created a singleThreadExecutor, so one cleanup will be running at a
+    // time.
+    ThreadFactory build = new ThreadFactoryBuilder().setDaemon(true)
+        .setNameFormat("PartialTableCache Cleanup Thread - %d").build();
+    executorService = Executors.newSingleThreadExecutor(build);
+  }
+
+  @Override
+  public CACHEVALUE get(CACHEKEY cachekey) {
+    return cache.get(cachekey);
+  }
+
+  @Override
+  public void loadInitial(CACHEKEY cacheKey, CACHEVALUE cacheValue) {
+    // Do nothing for partial table cache.
+  }
+
+  @Override
+  public void put(CACHEKEY cacheKey, CACHEVALUE value) {
+    cache.put(cacheKey, value);
+    epochEntries.add(new EpochEntry<>(value.getEpoch(), cacheKey));
+  }
+
+  public void cleanup(List<Long> epochs) {
+    executorService.execute(() -> evictCache(epochs));
+  }
+
+  @Override
+  public int size() {
+    return cache.size();
+  }
+
+  @Override
+  public Iterator<Map.Entry<CACHEKEY, CACHEVALUE>> iterator() {
+    return cache.entrySet().iterator();
+  }
+
+  @VisibleForTesting
+  public void evictCache(List<Long> epochs) {
+    EpochEntry<CACHEKEY> currentEntry;
+    CACHEKEY cachekey;
+    long lastEpoch = epochs.get(epochs.size() - 1);
+    for (Iterator<EpochEntry<CACHEKEY>> iterator = epochEntries.iterator();
+         iterator.hasNext();) {
+      currentEntry = iterator.next();
+      cachekey = currentEntry.getCachekey();
+      long currentEpoch = currentEntry.getEpoch();
+
+      // If currentEntry epoch is greater than last epoch provided, we have
+      // deleted all entries less than specified epoch. So, we can break.
+      if (currentEpoch > lastEpoch) {
+        break;
+      }
+
+      // As ConcurrentHashMap#computeIfPresent is atomic, there is no race
+      // condition between cache cleanup and requests updating the same entry.
+      if (epochs.contains(currentEpoch)) {
+        // Remove epoch entry, as the entry is there in epoch list.
+        iterator.remove();
+        cache.computeIfPresent(cachekey, ((k, v) -> {
+          // If cache epoch entry matches with current Epoch, remove entry
+          // from cache.
+          if (v.getEpoch() == currentEpoch) {
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("CacheKey {} with epoch {} is removed from cache",
+                  k.getCacheKey(), currentEpoch);
+            }
+            return null;
+          }
+          return v;
+        }));
+      }
+    }
+  }
+
+  public CacheResult<CACHEVALUE> lookup(CACHEKEY cachekey) {
+
+    CACHEVALUE cachevalue = cache.get(cachekey);
+    if (cachevalue == null) {
+      return new CacheResult<>(CacheResult.CacheStatus.MAY_EXIST,
+            null);
+    } else {
+      if (cachevalue.getCacheValue() != null) {
+        return new CacheResult<>(CacheResult.CacheStatus.EXISTS, cachevalue);
+      } else {
+        // When entity is marked for delete, cacheValue will be set to null.
+        return new CacheResult<>(CacheResult.CacheStatus.NOT_EXIST, null);
+      }
+    }
+  }
+
+  @VisibleForTesting
+  public Set<EpochEntry<CACHEKEY>> getEpochEntrySet() {
+    return epochEntries;
+  }
+
+}
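
The lookup contract shared by both cache implementations can be summarised by a minimal standalone sketch; the booleans stand in for the CacheKey/CacheValue wrappers and are assumptions made for brevity.

    // Minimal illustration of the lookup semantics: 'cached' says whether the
    // key is present in the cache at all, 'deleted' whether the cached entry
    // is a delete marker.
    public final class LookupSemanticsSketch {

      public enum CacheStatus { EXISTS, NOT_EXIST, MAY_EXIST }

      public static CacheStatus lookup(boolean cached, boolean deleted,
          boolean fullCache) {
        if (!cached) {
          // A full cache mirrors the DB, so a miss is definitive; a partial
          // cache must fall back to the DB, hence MAY_EXIST.
          return fullCache ? CacheStatus.NOT_EXIST : CacheStatus.MAY_EXIST;
        }
        return deleted ? CacheStatus.NOT_EXIST : CacheStatus.EXISTS;
      }
    }
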
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCache.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCache.java
index 8acb708..ab4b73d 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCache.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCache.java
@@ -47,9 +47,9 @@
   CACHEVALUE get(CACHEKEY cacheKey);
 
   /**
-   * This method should be called for tables with cache cleanup policy
-   * {@link TableCacheImpl.CacheCleanupPolicy#NEVER} after system restart to
-   * fill up the cache.
+   * This method should be called for tables with cache type
+   * {@link TableCache.CacheType#FULL_CACHE} after a system restart to fill
+   * up the cache.
    * @param cacheKey
    * @param cacheValue
    */
@@ -73,6 +73,9 @@
    */
   void cleanup(List<Long> epochs);
 
+  @VisibleForTesting
+  void evictCache(List<Long> epochs);
+
   /**
    * Return the size of the cache.
    * @return size
@@ -92,15 +95,13 @@
    * {@link CacheResult.CacheStatus#EXISTS}
    *
    * If it does not exist:
-   *  If cache clean up policy is
-   *  {@link TableCacheImpl.CacheCleanupPolicy#NEVER} it means table cache is
-   *  full cache. It return's {@link CacheResult} with null
-   *  and status as {@link CacheResult.CacheStatus#NOT_EXIST}.
+   *  If cache type is
+   *  {@link TableCache.CacheType#FULL_CACHE}, it returns {@link CacheResult}
+   *  with null and status {@link CacheResult.CacheStatus#NOT_EXIST}.
    *
-   *  If cache clean up policy is
-   *  {@link TableCacheImpl.CacheCleanupPolicy#MANUAL} it means
-   *  table cache is partial cache. It return's {@link CacheResult} with
-   *  null and status as MAY_EXIST.
+   *  If cache type is
+   *  {@link TableCache.CacheType#PARTIAL_CACHE}, it returns
+   *  {@link CacheResult} with null and status MAY_EXIST.
    *
    * @param cachekey
    */
@@ -109,4 +110,11 @@
 
   @VisibleForTesting
   Set<EpochEntry<CACHEKEY>> getEpochEntrySet();
+
+  enum CacheType {
+    FULL_CACHE, // The table maintains a full cache; cache and DB
+    // state are the same.
+    PARTIAL_CACHE // The table maintains a partial cache; entries are held
+    // only until they are flushed to the DB.
+  }
 }
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java
deleted file mode 100644
index d35522d..0000000
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java
+++ /dev/null
@@ -1,205 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- *
- */
-
-package org.apache.hadoop.hdds.utils.db.cache;
-
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.NavigableSet;
-import java.util.Set;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ConcurrentSkipListMap;
-import java.util.concurrent.ConcurrentSkipListSet;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ThreadFactory;
-import java.util.concurrent.atomic.AtomicBoolean;
-
-import com.google.common.annotations.VisibleForTesting;
-import com.google.common.util.concurrent.ThreadFactoryBuilder;
-import org.apache.hadoop.hdds.annotation.InterfaceAudience.Private;
-import org.apache.hadoop.hdds.annotation.InterfaceStability.Evolving;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * Cache implementation for the table. Depending on the cache clean up policy
- * this cache will be full cache or partial cache.
- *
- * If cache cleanup policy is set as {@link CacheCleanupPolicy#MANUAL},
- * this will be a partial cache.
- *
- * If cache cleanup policy is set as {@link CacheCleanupPolicy#NEVER},
- * this will be a full cache.
- */
-@Private
-@Evolving
-public class TableCacheImpl<CACHEKEY extends CacheKey,
-    CACHEVALUE extends CacheValue> implements TableCache<CACHEKEY, CACHEVALUE> {
-
-  public static final Logger LOG =
-      LoggerFactory.getLogger(TableCacheImpl.class);
-
-  private final Map<CACHEKEY, CACHEVALUE> cache;
-  private final NavigableSet<EpochEntry<CACHEKEY>> epochEntries;
-  private ExecutorService executorService;
-  private CacheCleanupPolicy cleanupPolicy;
-
-
-
-  public TableCacheImpl(CacheCleanupPolicy cleanupPolicy) {
-
-    // As for full table cache only we need elements to be inserted in sorted
-    // manner, so that list will be easy. For other we can go with Hash map.
-    if (cleanupPolicy == CacheCleanupPolicy.NEVER) {
-      cache = new ConcurrentSkipListMap<>();
-    } else {
-      cache = new ConcurrentHashMap<>();
-    }
-    epochEntries = new ConcurrentSkipListSet<>();
-    // Created a singleThreadExecutor, so one cleanup will be running at a
-    // time.
-    ThreadFactory build = new ThreadFactoryBuilder().setDaemon(true)
-        .setNameFormat("PartialTableCache Cleanup Thread - %d").build();
-    executorService = Executors.newSingleThreadExecutor(build);
-    this.cleanupPolicy = cleanupPolicy;
-  }
-
-  @Override
-  public CACHEVALUE get(CACHEKEY cachekey) {
-    return cache.get(cachekey);
-  }
-
-  @Override
-  public void loadInitial(CACHEKEY cacheKey, CACHEVALUE cacheValue) {
-    // No need to add entry to epochEntries. Adding to cache is required during
-    // normal put operation.
-    cache.put(cacheKey, cacheValue);
-  }
-
-  @Override
-  public void put(CACHEKEY cacheKey, CACHEVALUE value) {
-    cache.put(cacheKey, value);
-    epochEntries.add(new EpochEntry<>(value.getEpoch(), cacheKey));
-  }
-
-  public void cleanup(List<Long> epochs) {
-    executorService.execute(() -> evictCache(epochs));
-  }
-
-  @Override
-  public int size() {
-    return cache.size();
-  }
-
-  @Override
-  public Iterator<Map.Entry<CACHEKEY, CACHEVALUE>> iterator() {
-    return cache.entrySet().iterator();
-  }
-
-  @VisibleForTesting
-  protected void evictCache(List<Long> epochs) {
-    EpochEntry<CACHEKEY> currentEntry;
-    final AtomicBoolean removed = new AtomicBoolean();
-    CACHEKEY cachekey;
-    long lastEpoch = epochs.get(epochs.size() - 1);
-    for (Iterator<EpochEntry<CACHEKEY>> iterator = epochEntries.iterator();
-         iterator.hasNext();) {
-      currentEntry = iterator.next();
-      cachekey = currentEntry.getCachekey();
-      long currentEpoch = currentEntry.getEpoch();
-      CacheValue cacheValue = cache.computeIfPresent(cachekey, ((k, v) -> {
-        if (cleanupPolicy == CacheCleanupPolicy.MANUAL) {
-          if (v.getEpoch() == currentEpoch && epochs.contains(v.getEpoch())) {
-            LOG.debug("CacheKey {} with epoch {} is removed from cache",
-                k.getCacheKey(), currentEpoch);
-            iterator.remove();
-            removed.set(true);
-            return null;
-          }
-        } else if (cleanupPolicy == CacheCleanupPolicy.NEVER) {
-          // Remove only entries which are marked for delete.
-          if (v.getEpoch() == currentEpoch && epochs.contains(v.getEpoch())
-              && v.getCacheValue() == null) {
-            LOG.debug("CacheKey {} with epoch {} is removed from cache",
-                k.getCacheKey(), currentEpoch);
-            removed.set(true);
-            iterator.remove();
-            return null;
-          }
-        }
-        return v;
-      }));
-
-      // If override entries, then for those epoch entries, there will be no
-      // entry in cache. This can occur in the case we have cleaned up the
-      // override cache entry, but in epoch entry it is still lying around.
-      // This is done to cleanup epoch entries.
-      if (!removed.get() && cacheValue == null) {
-        LOG.debug("CacheKey {} with epoch {} is removed from epochEntry for " +
-                "a key not existing in cache", cachekey.getCacheKey(),
-            currentEpoch);
-        iterator.remove();
-      } else if (currentEpoch >= lastEpoch) {
-        // If currentEntry epoch is greater than last epoch provided, we have
-        // deleted all entries less than specified epoch. So, we can break.
-        break;
-      }
-      removed.set(false);
-    }
-  }
-
-  public CacheResult<CACHEVALUE> lookup(CACHEKEY cachekey) {
-
-    CACHEVALUE cachevalue = cache.get(cachekey);
-    if (cachevalue == null) {
-      if (cleanupPolicy == CacheCleanupPolicy.NEVER) {
-        return new CacheResult<>(CacheResult.CacheStatus.NOT_EXIST, null);
-      } else {
-        return new CacheResult<>(CacheResult.CacheStatus.MAY_EXIST,
-            null);
-      }
-    } else {
-      if (cachevalue.getCacheValue() != null) {
-        return new CacheResult<>(CacheResult.CacheStatus.EXISTS, cachevalue);
-      } else {
-        // When entity is marked for delete, cacheValue will be set to null.
-        // In that case we can return NOT_EXIST irrespective of cache cleanup
-        // policy.
-        return new CacheResult<>(CacheResult.CacheStatus.NOT_EXIST, null);
-      }
-    }
-  }
-
-  @VisibleForTesting
-  public Set<EpochEntry<CACHEKEY>> getEpochEntrySet() {
-    return epochEntries;
-  }
-
-  /**
-   * Cleanup policies for table cache.
-   */
-  public enum CacheCleanupPolicy {
-    NEVER, // Cache will not be cleaned up. This mean's the table maintains
-    // full cache.
-    MANUAL // Cache will be cleaned up, once after flushing to DB. It is
-    // caller's responsibility to flush to DB, before calling cleanup cache.
-  }
-}
diff --git a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/authority/MockCAStore.java b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/authority/MockCAStore.java
index 1dea512..633ae19 100644
--- a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/authority/MockCAStore.java
+++ b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/authority/MockCAStore.java
@@ -19,9 +19,13 @@
 
 package org.apache.hadoop.hdds.security.x509.certificate.authority;
 
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+
 import java.io.IOException;
 import java.math.BigInteger;
 import java.security.cert.X509Certificate;
+import java.util.Collections;
+import java.util.List;
 
 /**
  *
@@ -51,4 +55,11 @@
       throws IOException {
     return null;
   }
+
+  @Override
+  public List<X509Certificate> listCertificate(HddsProtos.NodeType role,
+      BigInteger startSerialID, int count, CertType certType)
+      throws IOException {
+    return Collections.emptyList();
+  }
 }
diff --git a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/client/TestDefaultCertificateClient.java b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/client/TestDefaultCertificateClient.java
index f389cdb..053520a 100644
--- a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/client/TestDefaultCertificateClient.java
+++ b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/security/x509/certificate/client/TestDefaultCertificateClient.java
@@ -50,6 +50,7 @@
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.LambdaTestUtils;
 
+import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.*;
 import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_METADATA_DIR_NAME;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_NAMES;
@@ -75,7 +76,6 @@
   private Path dnMetaDirPath;
   private SecurityConfig omSecurityConfig;
   private SecurityConfig dnSecurityConfig;
-  private final static String UTF = "UTF-8";
   private final static String DN_COMPONENT = DNCertificateClient.COMPONENT_NAME;
   private final static String OM_COMPONENT = OMCertificateClient.COMPONENT_NAME;
   private KeyCodec omKeyCodec;
@@ -201,7 +201,7 @@
 
   @Test
   public void testSignDataStream() throws Exception {
-    String data = RandomStringUtils.random(100, UTF);
+    String data = RandomStringUtils.random(100);
     FileUtils.deleteQuietly(Paths.get(
         omSecurityConfig.getKeyLocation(OM_COMPONENT).toString(),
         omSecurityConfig.getPrivateKeyFileName()).toFile());
@@ -212,13 +212,12 @@
     // Expect error when there is no private key to sign.
     LambdaTestUtils.intercept(IOException.class, "Error while " +
             "signing the stream",
-        () -> omCertClient.signDataStream(IOUtils.toInputStream(data,
-            UTF)));
+        () -> omCertClient.signDataStream(IOUtils.toInputStream(data, UTF_8)));
 
     generateKeyPairFiles();
     byte[] sign = omCertClient.signDataStream(IOUtils.toInputStream(data,
-        UTF));
-    validateHash(sign, data.getBytes());
+        UTF_8));
+    validateHash(sign, data.getBytes(UTF_8));
   }
 
   /**
@@ -239,21 +238,22 @@
    */
   @Test
   public void verifySignatureStream() throws Exception {
-    String data = RandomStringUtils.random(500, UTF);
+    String data = RandomStringUtils.random(500);
     byte[] sign = omCertClient.signDataStream(IOUtils.toInputStream(data,
-        UTF));
+        UTF_8));
 
     // Positive tests.
-    assertTrue(omCertClient.verifySignature(data.getBytes(), sign,
+    assertTrue(omCertClient.verifySignature(data.getBytes(UTF_8), sign,
         x509Certificate));
-    assertTrue(omCertClient.verifySignature(IOUtils.toInputStream(data, UTF),
+    assertTrue(omCertClient.verifySignature(
+        IOUtils.toInputStream(data, UTF_8),
         sign, x509Certificate));
 
     // Negative tests.
-    assertFalse(omCertClient.verifySignature(data.getBytes(),
-        "abc".getBytes(), x509Certificate));
+    assertFalse(omCertClient.verifySignature(data.getBytes(UTF_8),
+        "abc".getBytes(UTF_8), x509Certificate));
     assertFalse(omCertClient.verifySignature(IOUtils.toInputStream(data,
-        UTF), "abc".getBytes(), x509Certificate));
+        UTF_8), "abc".getBytes(UTF_8), x509Certificate));
 
   }
 
@@ -262,20 +262,21 @@
    */
   @Test
   public void verifySignatureDataArray() throws Exception {
-    String data = RandomStringUtils.random(500, UTF);
-    byte[] sign = omCertClient.signData(data.getBytes());
+    String data = RandomStringUtils.random(500);
+    byte[] sign = omCertClient.signData(data.getBytes(UTF_8));
 
     // Positive tests.
-    assertTrue(omCertClient.verifySignature(data.getBytes(), sign,
+    assertTrue(omCertClient.verifySignature(data.getBytes(UTF_8), sign,
         x509Certificate));
-    assertTrue(omCertClient.verifySignature(IOUtils.toInputStream(data, UTF),
+    assertTrue(omCertClient.verifySignature(
+        IOUtils.toInputStream(data, UTF_8),
         sign, x509Certificate));
 
     // Negative tests.
-    assertFalse(omCertClient.verifySignature(data.getBytes(),
-        "abc".getBytes(), x509Certificate));
+    assertFalse(omCertClient.verifySignature(data.getBytes(UTF_8),
+        "abc".getBytes(UTF_8), x509Certificate));
     assertFalse(omCertClient.verifySignature(IOUtils.toInputStream(data,
-        UTF), "abc".getBytes(), x509Certificate));
+        UTF_8), "abc".getBytes(UTF_8), x509Certificate));
 
   }
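
As a side note on the charset cleanup above (a sketch of the general Java idiom, not code from this patch): the Charset-typed overloads used with StandardCharsets.UTF_8 cannot throw UnsupportedEncodingException and are checked at compile time, while the name-based overloads are typo-prone. The old RandomStringUtils.random(100, UTF) call also appears to have treated the string "UTF-8" as the pool of candidate characters rather than as an encoding, which the new random(100) call avoids.

    import static java.nio.charset.StandardCharsets.UTF_8;
    import java.io.UnsupportedEncodingException;

    class CharsetIdiom {
      static byte[] preferred(String s) {
        return s.getBytes(UTF_8);     // Charset overload: no checked exception
      }
      static byte[] legacy(String s) throws UnsupportedEncodingException {
        return s.getBytes("UTF-8");   // name-based overload: checked exception, typo risk
      }
    }
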
 
diff --git a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestJsonUtils.java b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestJsonUtils.java
index b5452fb..c84eae5 100644
--- a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestJsonUtils.java
+++ b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestJsonUtils.java
@@ -38,7 +38,7 @@
 
     assertContains(result, "\"rawSize\" : 123");
     assertContains(result, "\"unit\" : \"MB\"");
-    assertContains(result, "\"quotaInCounts\" : 1000");
+    assertContains(result, "\"quotaInNamespace\" : 1000");
   }
 
   private static void assertContains(String str, String part) {
diff --git a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/http/TestRatisDropwizardExports.java b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/http/TestRatisDropwizardExports.java
index 25f1cef..906ff55 100644
--- a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/http/TestRatisDropwizardExports.java
+++ b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/http/TestRatisDropwizardExports.java
@@ -24,7 +24,10 @@
 import com.codahale.metrics.MetricRegistry;
 import io.prometheus.client.CollectorRegistry;
 import io.prometheus.client.exporter.common.TextFormat;
-import org.apache.ratis.server.metrics.RaftLogMetrics;
+import org.apache.ratis.protocol.RaftGroupId;
+import org.apache.ratis.protocol.RaftGroupMemberId;
+import org.apache.ratis.protocol.RaftPeerId;
+import org.apache.ratis.server.metrics.SegmentedRaftLogMetrics;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -36,7 +39,9 @@
   @Test
   public void export() throws IOException {
     //create Ratis metrics
-    RaftLogMetrics instance = new RaftLogMetrics("instance");
+    SegmentedRaftLogMetrics instance = new SegmentedRaftLogMetrics(
+        RaftGroupMemberId.valueOf(
+            RaftPeerId.valueOf("peerId"), RaftGroupId.randomId()));
     instance.getRaftLogSyncTimer().update(10, TimeUnit.MILLISECONDS);
     MetricRegistry dropWizardMetricRegistry =
         instance.getRegistry().getDropWizardMetricRegistry();
diff --git a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/utils/db/cache/TestTableCacheImpl.java b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/utils/db/cache/TestTableCache.java
similarity index 69%
rename from hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/utils/db/cache/TestTableCacheImpl.java
rename to hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/utils/db/cache/TestTableCache.java
index 891c065..07ed307 100644
--- a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/utils/db/cache/TestTableCacheImpl.java
+++ b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/utils/db/cache/TestTableCache.java
@@ -26,11 +26,13 @@
 import java.util.concurrent.CompletableFuture;
 
 import com.google.common.base.Optional;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
+import org.slf4j.event.Level;
 
 import static org.junit.Assert.fail;
 
@@ -38,31 +40,35 @@
  * Class tests partial table cache.
  */
 @RunWith(value = Parameterized.class)
-public class TestTableCacheImpl {
-  private TableCacheImpl<CacheKey<String>, CacheValue<String>> tableCache;
+public class TestTableCache {
+  private TableCache<CacheKey<String>, CacheValue<String>> tableCache;
 
-  private final TableCacheImpl.CacheCleanupPolicy cacheCleanupPolicy;
+  private final TableCache.CacheType cacheType;
 
 
   @Parameterized.Parameters
   public static Collection<Object[]> policy() {
     Object[][] params = new Object[][] {
-        {TableCacheImpl.CacheCleanupPolicy.NEVER},
-        {TableCacheImpl.CacheCleanupPolicy.MANUAL}
+        {TableCache.CacheType.FULL_CACHE},
+        {TableCache.CacheType.PARTIAL_CACHE}
     };
     return Arrays.asList(params);
   }
 
-  public TestTableCacheImpl(
-      TableCacheImpl.CacheCleanupPolicy cacheCleanupPolicy) {
-    this.cacheCleanupPolicy = cacheCleanupPolicy;
+  public TestTableCache(
+      TableCache.CacheType cacheType) {
+    GenericTestUtils.setLogLevel(FullTableCache.LOG, Level.DEBUG);
+    this.cacheType = cacheType;
   }
 
 
   @Before
   public void create() {
-    tableCache =
-        new TableCacheImpl<>(cacheCleanupPolicy);
+    if (cacheType == TableCache.CacheType.FULL_CACHE) {
+      tableCache = new FullTableCache<>();
+    } else {
+      tableCache = new PartialTableCache<>();
+    }
   }
   @Test
   public void testPartialTableCache() {
@@ -119,7 +125,7 @@
     final int count = totalCount;
 
     // If cleanup policy is manual entries should have been removed.
-    if (cacheCleanupPolicy == TableCacheImpl.CacheCleanupPolicy.MANUAL) {
+    if (cacheType == TableCache.CacheType.PARTIAL_CACHE) {
       Assert.assertEquals(count - epochs.size(), tableCache.size());
 
       // Check remaining entries exist or not and deleted entries does not
@@ -178,14 +184,13 @@
     epochs.add(3L);
     epochs.add(4L);
 
-    if (cacheCleanupPolicy == cacheCleanupPolicy.MANUAL) {
+    if (cacheType == TableCache.CacheType.PARTIAL_CACHE) {
 
       tableCache.evictCache(epochs);
 
       Assert.assertEquals(0, tableCache.size());
 
-      // Epoch entries which are overrided still exist.
-      Assert.assertEquals(2, tableCache.getEpochEntrySet().size());
+      Assert.assertEquals(0, tableCache.getEpochEntrySet().size());
     }
 
     // Add a new entry.
@@ -194,7 +199,7 @@
 
     epochs = new ArrayList<>();
     epochs.add(5L);
-    if (cacheCleanupPolicy == cacheCleanupPolicy.MANUAL) {
+    if (cacheType == TableCache.CacheType.PARTIAL_CACHE) {
       tableCache.evictCache(epochs);
 
       Assert.assertEquals(0, tableCache.size());
@@ -252,21 +257,19 @@
     epochs.add(6L);
 
 
-    if (cacheCleanupPolicy == cacheCleanupPolicy.MANUAL) {
+    if (cacheType == TableCache.CacheType.PARTIAL_CACHE) {
       tableCache.evictCache(epochs);
 
       Assert.assertEquals(0, tableCache.size());
 
-      // Epoch entries which are overrided still exist.
-      Assert.assertEquals(4, tableCache.getEpochEntrySet().size());
+      Assert.assertEquals(0, tableCache.getEpochEntrySet().size());
     } else {
       tableCache.evictCache(epochs);
 
       Assert.assertEquals(1, tableCache.size());
 
-      // Epoch entries which are overrided still exist and one not deleted As
-      // this cache clean up policy is NEVER.
-      Assert.assertEquals(5, tableCache.getEpochEntrySet().size());
+      // Epoch entries which are overridden will also be cleaned up.
+      Assert.assertEquals(0, tableCache.getEpochEntrySet().size());
     }
 
     // Add a new entry, now old override entries will be cleaned up.
@@ -276,7 +279,7 @@
     epochs = new ArrayList<>();
     epochs.add(7L);
 
-    if (cacheCleanupPolicy == cacheCleanupPolicy.MANUAL) {
+    if (cacheType == TableCache.CacheType.PARTIAL_CACHE) {
       tableCache.evictCache(epochs);
 
       Assert.assertEquals(0, tableCache.size());
@@ -289,9 +292,9 @@
       // 2 entries will be in cache, as 2 are not deleted.
       Assert.assertEquals(2, tableCache.size());
 
-      // Epoch entries which are not marked for delete will exist override
-      // entries will be cleaned up.
-      Assert.assertEquals(2, tableCache.getEpochEntrySet().size());
+      // Epoch entries which are not marked for delete will also be cleaned up,
+      // as they are overridden entries in the full cache.
+      Assert.assertEquals(0, tableCache.getEpochEntrySet().size());
     }
 
 
@@ -337,7 +340,7 @@
 
     totalCount += value;
 
-    if (cacheCleanupPolicy == TableCacheImpl.CacheCleanupPolicy.MANUAL) {
+    if (cacheType == TableCache.CacheType.PARTIAL_CACHE) {
       int deleted = 5;
 
       // cleanup first 5 entires
@@ -380,6 +383,95 @@
 
   }
 
+  @Test
+  public void testTableCache() {
+
+    // In non-HA epoch entries might be out of order.
+    // Scenario is like create vol, set vol, set vol, delete vol
+    tableCache.put(new CacheKey<>(Long.toString(0)),
+        new CacheValue<>(Optional.of(Long.toString(0)), 0));
+    tableCache.put(new CacheKey<>(Long.toString(0)),
+        new CacheValue<>(Optional.of(Long.toString(1)), 1));
+    tableCache.put(new CacheKey<>(Long.toString(0)),
+        new CacheValue<>(Optional.of(Long.toString(2)), 3));
+
+    tableCache.put(new CacheKey<>(Long.toString(0)),
+        new CacheValue<>(Optional.absent(), 2));
+
+    List<Long> epochs = new ArrayList<>();
+    epochs.add(0L);
+    epochs.add(1L);
+    epochs.add(2L);
+    epochs.add(3L);
+
+    tableCache.evictCache(epochs);
+
+    Assert.assertTrue(tableCache.size() == 0);
+    Assert.assertTrue(tableCache.getEpochEntrySet().size() == 0);
+  }
+
+
+  @Test
+  public void testTableCacheWithNonConsecutiveEpochList() {
+
+    // In non-HA epoch entries might be out of order.
+    tableCache.put(new CacheKey<>(Long.toString(0)),
+        new CacheValue<>(Optional.of(Long.toString(0)), 0));
+    tableCache.put(new CacheKey<>(Long.toString(0)),
+        new CacheValue<>(Optional.of(Long.toString(1)), 1));
+    tableCache.put(new CacheKey<>(Long.toString(0)),
+        new CacheValue<>(Optional.of(Long.toString(3)), 3));
+
+    tableCache.put(new CacheKey<>(Long.toString(0)),
+          new CacheValue<>(Optional.of(Long.toString(2)), 2));
+
+    tableCache.put(new CacheKey<>(Long.toString(1)),
+        new CacheValue<>(Optional.of(Long.toString(1)), 4));
+
+    List<Long> epochs = new ArrayList<>();
+    epochs.add(0L);
+    epochs.add(1L);
+    epochs.add(3L);
+
+    tableCache.evictCache(epochs);
+
+    Assert.assertTrue(tableCache.size() == 2);
+    Assert.assertTrue(tableCache.getEpochEntrySet().size() == 2);
+
+    Assert.assertNotNull(tableCache.get(new CacheKey<>(Long.toString(0))));
+    Assert.assertEquals(2,
+        tableCache.get(new CacheKey<>(Long.toString(0))).getEpoch());
+
+    Assert.assertNotNull(tableCache.get(new CacheKey<>(Long.toString(1))));
+    Assert.assertEquals(4,
+        tableCache.get(new CacheKey<>(Long.toString(1))).getEpoch());
+
+    // now evict 2,4
+    epochs = new ArrayList<>();
+    epochs.add(2L);
+    epochs.add(4L);
+
+    tableCache.evictCache(epochs);
+
+    if (cacheType == TableCache.CacheType.PARTIAL_CACHE) {
+      Assert.assertTrue(tableCache.size() == 0);
+      Assert.assertTrue(tableCache.getEpochEntrySet().size() == 0);
+    } else {
+      Assert.assertTrue(tableCache.size() == 2);
+      Assert.assertTrue(tableCache.getEpochEntrySet().size() == 0);
+
+      // Entries should exist, as the entries are not delete entries
+      Assert.assertNotNull(tableCache.get(new CacheKey<>(Long.toString(0))));
+      Assert.assertEquals(2,
+          tableCache.get(new CacheKey<>(Long.toString(0))).getEpoch());
+
+      Assert.assertNotNull(tableCache.get(new CacheKey<>(Long.toString(1))));
+      Assert.assertEquals(4,
+          tableCache.get(new CacheKey<>(Long.toString(1))).getEpoch());
+    }
+
+  }
+
   private int writeToCache(int count, int startVal, long sleep)
       throws InterruptedException {
     int counter = 1;
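
Condensing the updated assertions above into one view (an illustrative summary, not code from this patch): when evictCache is called for epochs whose entries were merely overwritten (not deleted), the two cache types now behave as follows.

    // cacheType       tableCache.size()   getEpochEntrySet().size()
    // PARTIAL_CACHE   0                   0   (evicted entries and epoch records dropped)
    // FULL_CACHE      2                   0   (latest non-delete entries retained,
    //                                          epoch records still cleaned up)
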
diff --git a/hadoop-hdds/interface-admin/src/main/proto/ScmAdminProtocol.proto b/hadoop-hdds/interface-admin/src/main/proto/ScmAdminProtocol.proto
index 886b43c..1a85ebb 100644
--- a/hadoop-hdds/interface-admin/src/main/proto/ScmAdminProtocol.proto
+++ b/hadoop-hdds/interface-admin/src/main/proto/ScmAdminProtocol.proto
@@ -62,9 +62,12 @@
   optional GetPipelineRequestProto getPipelineRequest = 24;
   optional GetContainerWithPipelineBatchRequestProto getContainerWithPipelineBatchRequest = 25;
   optional GetSafeModeRuleStatusesRequestProto getSafeModeRuleStatusesRequest = 26;
-  optional FinalizeScmUpgradeRequestProto finalizeScmUpgradeRequest = 27;
+  optional DecommissionNodesRequestProto decommissionNodesRequest = 27;
+  optional RecommissionNodesRequestProto recommissionNodesRequest = 28;
+  optional StartMaintenanceNodesRequestProto startMaintenanceNodesRequest = 29;
+  optional FinalizeScmUpgradeRequestProto finalizeScmUpgradeRequest = 30;
   optional QueryUpgradeFinalizationProgressRequestProto
-  queryUpgradeFinalizationProgressRequest = 28;
+  queryUpgradeFinalizationProgressRequest = 31;
 }
 
 message ScmContainerLocationResponse {
@@ -99,9 +102,12 @@
   optional GetPipelineResponseProto getPipelineResponse = 24;
   optional GetContainerWithPipelineBatchResponseProto getContainerWithPipelineBatchResponse = 25;
   optional GetSafeModeRuleStatusesResponseProto getSafeModeRuleStatusesResponse = 26;
-  optional FinalizeScmUpgradeResponseProto finalizeScmUpgradeResponse = 27;
+  optional DecommissionNodesResponseProto decommissionNodesResponse = 27;
+  optional RecommissionNodesResponseProto recommissionNodesResponse = 28;
+  optional StartMaintenanceNodesResponseProto startMaintenanceNodesResponse = 29;
+  optional FinalizeScmUpgradeResponseProto finalizeScmUpgradeResponse = 30;
   optional QueryUpgradeFinalizationProgressResponseProto
-  queryUpgradeFinalizationProgressResponse = 28;
+  queryUpgradeFinalizationProgressResponse = 31;
   enum Status {
     OK = 1;
     CONTAINER_ALREADY_EXISTS = 2;
@@ -132,8 +138,11 @@
   GetPipeline = 19;
   GetContainerWithPipelineBatch = 20;
   GetSafeModeRuleStatuses = 21;
-  FinalizeScmUpgrade = 22;
-  QueryUpgradeFinalizationProgress = 23;
+  DecommissionNodes = 22;
+  RecommissionNodes = 23;
+  StartMaintenanceNodes = 24;
+  FinalizeScmUpgrade = 25;
+  QueryUpgradeFinalizationProgress = 26;
 }
 
 /**
@@ -237,16 +246,51 @@
  match the NodeState that we are requesting.
 */
 message NodeQueryRequestProto {
-  required NodeState state = 1;
+  optional NodeState state = 1;
   required QueryScope scope = 2;
   optional string poolName = 3; // if scope is pool, then pool name is needed.
   optional string traceID = 4;
+  optional NodeOperationalState opState = 5;
 }
 
 message NodeQueryResponseProto {
   repeated Node datanodes = 1;
 }
 
+/*
+  Decommission a list of hosts
+*/
+message DecommissionNodesRequestProto {
+  repeated string hosts = 1;
+}
+
+message DecommissionNodesResponseProto {
+  // empty response
+}
+
+/*
+  Recommission a list of hosts in maintenance or decommission states
+*/
+message RecommissionNodesRequestProto {
+  repeated string hosts = 1;
+}
+
+message RecommissionNodesResponseProto {
+  // empty response
+}
+
+/*
+  Place a list of hosts into maintenance mode
+*/
+message StartMaintenanceNodesRequestProto {
+  repeated string hosts = 1;
+  optional int64 endInHours = 2;
+}
+
+message StartMaintenanceNodesResponseProto {
+  // empty response
+}
+
 /**
   Request to create a replication pipeline.
  */
@@ -371,5 +415,4 @@
  */
 service StorageContainerLocationProtocolService {
   rpc submitRequest (ScmContainerLocationRequest) returns (ScmContainerLocationResponse);
-
 }
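
A hedged client-side sketch of the new decommission-related messages (only the fields come from the proto above; the builder methods follow standard protobuf-java code generation, and the surrounding imports and outer class are assumptions):

    DecommissionNodesRequestProto decommission =
        DecommissionNodesRequestProto.newBuilder()
            .addHosts("dn1.example.com")
            .addHosts("dn2.example.com")
            .build();

    StartMaintenanceNodesRequestProto maintenance =
        StartMaintenanceNodesRequestProto.newBuilder()
            .addHosts("dn3.example.com")
            .setEndInHours(48)   // optional: expire maintenance after 48 hours
            .build();
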
diff --git a/hadoop-hdds/interface-client/src/main/proto/hdds.proto b/hadoop-hdds/interface-client/src/main/proto/hdds.proto
index 3517731..afe8f1f 100644
--- a/hadoop-hdds/interface-client/src/main/proto/hdds.proto
+++ b/hadoop-hdds/interface-client/src/main/proto/hdds.proto
@@ -43,6 +43,8 @@
     // network name, can be Ip address or host name, depends
     optional string networkName = 6;
     optional string networkLocation = 7; // Network topology location
+    optional NodeOperationalState persistedOpState = 8; // The Operational state persisted in the datanode.id file
+    optional int64 persistedOpStateExpiry = 9; // The seconds after the epoch when the OpState should expire
     // TODO(runzhiwang): when uuid is gone, specify 1 as the index of uuid128 and mark as required
     optional UUID uuid128 = 100; // UUID with 128 bits assigned to the Datanode.
 }
@@ -129,9 +131,15 @@
     HEALTHY = 1;
     STALE = 2;
     DEAD = 3;
+    HEALTHY_READONLY = 6;
+}
+
+enum NodeOperationalState {
+    IN_SERVICE = 1;
+    ENTERING_MAINTENANCE = 2;
+    IN_MAINTENANCE = 3;
     DECOMMISSIONING = 4;
     DECOMMISSIONED = 5;
-    HEALTHY_READONLY = 6;
 }
 
 enum QueryScope {
@@ -142,6 +150,7 @@
 message Node {
     required DatanodeDetailsProto nodeID = 1;
     repeated NodeState nodeStates = 2;
+    repeated NodeOperationalState nodeOperationalStates = 3;
 }
 
 message NodePool {
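
The change above splits node description into two independent axes: liveness (NodeState) and administrative state (NodeOperationalState), so a datanode can be, for example, HEALTHY and DECOMMISSIONING at the same time. A minimal sketch of the pairing (the class below is hypothetical; in SCM the NodeStatus class used elsewhere in this patch plays this role):

    // Hypothetical value pair combining the two enums from hdds.proto.
    final class NodeStatusSketch {
      final HddsProtos.NodeState health;              // HEALTHY, STALE, DEAD, HEALTHY_READONLY
      final HddsProtos.NodeOperationalState opState;  // IN_SERVICE, ENTERING_MAINTENANCE, ...

      NodeStatusSketch(HddsProtos.NodeState health,
          HddsProtos.NodeOperationalState opState) {
        this.health = health;
        this.opState = opState;
      }
    }
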
diff --git a/hadoop-hdds/interface-client/src/main/resources/proto.lock b/hadoop-hdds/interface-client/src/main/resources/proto.lock
index 581ffaf..8bd3023 100644
--- a/hadoop-hdds/interface-client/src/main/resources/proto.lock
+++ b/hadoop-hdds/interface-client/src/main/resources/proto.lock
@@ -1292,14 +1292,6 @@
               {
                 "name": "DEAD",
                 "integer": 3
-              },
-              {
-                "name": "DECOMMISSIONING",
-                "integer": 4
-              },
-              {
-                "name": "DECOMMISSIONED",
-                "integer": 5
               }
             ]
           },
diff --git a/hadoop-hdds/interface-server/src/main/proto/ScmServerDatanodeHeartbeatProtocol.proto b/hadoop-hdds/interface-server/src/main/proto/ScmServerDatanodeHeartbeatProtocol.proto
index 9d0dbd2..f129c0d 100644
--- a/hadoop-hdds/interface-server/src/main/proto/ScmServerDatanodeHeartbeatProtocol.proto
+++ b/hadoop-hdds/interface-server/src/main/proto/ScmServerDatanodeHeartbeatProtocol.proto
@@ -304,7 +304,8 @@
     replicateContainerCommand = 5;
     createPipelineCommand = 6;
     closePipelineCommand = 7;
-    finalizeNewLayoutVersionCommand = 8;
+    setNodeOperationalStateCommand = 8;
+    finalizeNewLayoutVersionCommand = 9;
   }
   // TODO: once we start using protoc 3.x, refactor this message using "oneof"
   required Type commandType = 1;
@@ -315,8 +316,9 @@
   optional ReplicateContainerCommandProto replicateContainerCommandProto = 6;
   optional CreatePipelineCommandProto createPipelineCommandProto = 7;
   optional ClosePipelineCommandProto closePipelineCommandProto = 8;
+  optional SetNodeOperationalStateCommandProto setNodeOperationalStateCommandProto = 9;
   optional FinalizeNewLayoutVersionCommandProto
-  finalizeNewLayoutVersionCommandProto = 9;
+  finalizeNewLayoutVersionCommandProto = 10;
 }
 
 /**
@@ -405,6 +407,12 @@
   required int64 cmdId = 2;
 }
 
+message SetNodeOperationalStateCommandProto {
+  required  int64 cmdId = 1;
+  required  NodeOperationalState nodeOperationalState = 2;
+  required  int64 stateExpiryEpochSeconds = 3;
+}
+
 /**
  * This command asks the DataNode to finalize a new layout version.
  */
diff --git a/hadoop-hdds/interface-server/src/main/proto/ScmServerSecurityProtocol.proto b/hadoop-hdds/interface-server/src/main/proto/ScmServerSecurityProtocol.proto
index 72e0e9f..114d215 100644
--- a/hadoop-hdds/interface-server/src/main/proto/ScmServerSecurityProtocol.proto
+++ b/hadoop-hdds/interface-server/src/main/proto/ScmServerSecurityProtocol.proto
@@ -48,6 +48,7 @@
     optional SCMGetOMCertRequestProto getOMCertRequest = 4;
     optional SCMGetCertificateRequestProto getCertificateRequest = 5;
     optional SCMGetCACertificateRequestProto getCACertificateRequest = 6;
+    optional SCMListCertificateRequestProto listCertificateRequest = 7;
 
 }
 
@@ -66,6 +67,8 @@
 
     optional SCMGetCertResponseProto getCertResponseProto = 6;
 
+    optional SCMListCertificateResponseProto listCertificateResponseProto = 7;
+
 }
 
 enum Type {
@@ -73,6 +76,7 @@
     GetOMCertificate = 2;
     GetCertificate = 3;
     GetCACertificate = 4;
+    ListCertificate = 5;
 }
 
 enum Status {
@@ -110,6 +114,16 @@
 }
 
 /**
+* Proto request to list certificates by node type or all.
+*/
+message SCMListCertificateRequestProto {
+    optional NodeType role = 1;
+    optional int64 startCertId = 2;
+    required uint32 count = 3; // Max
+    optional bool isRevoked = 4; // list revoked certs
+}
+
+/**
  * Returns a certificate signed by SCM.
  */
 message SCMGetCertResponseProto {
@@ -123,6 +137,18 @@
     optional string x509CACertificate = 3; // Base64 encoded CA X509 certificate.
 }
 
+/**
+* Return a list of PEM encoded certificates.
+*/
+message SCMListCertificateResponseProto {
+    enum ResponseCode {
+        success = 1;
+        authenticationFailed = 2;
+    }
+    required ResponseCode responseCode = 1;
+    repeated string certificates = 2;
+}
+
 
 service SCMSecurityProtocolService {
     rpc submitRequest (SCMSecurityRequest) returns (SCMSecurityResponse);
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/SCMCommonPlacementPolicy.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/SCMCommonPlacementPolicy.java
index dfacae0..91b5494 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/SCMCommonPlacementPolicy.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/SCMCommonPlacementPolicy.java
@@ -24,12 +24,12 @@
 
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.scm.container.placement.algorithms.ContainerPlacementStatusDefault;
 import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
 import org.apache.hadoop.hdds.scm.net.NetworkTopology;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 
 import com.google.common.annotations.VisibleForTesting;
 import org.slf4j.Logger;
@@ -122,7 +122,7 @@
       List<DatanodeDetails> excludedNodes, List<DatanodeDetails> favoredNodes,
       int nodesRequired, final long sizeRequired) throws SCMException {
     List<DatanodeDetails> healthyNodes =
-        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+        nodeManager.getNodes(NodeStatus.inServiceHealthy());
     if (excludedNodes != null) {
       healthyNodes.removeAll(excludedNodes);
     }
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
index 014c76c..fae21b4 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
@@ -297,8 +297,10 @@
     // TODO: track the block size info so that we can reclaim the container
     // TODO: used space when the block is deleted.
     for (BlockGroup bg : keyBlocksInfoList) {
-      LOG.info("Deleting blocks {}",
-          StringUtils.join(",", bg.getBlockIDList()));
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Deleting blocks {}",
+            StringUtils.join(",", bg.getBlockIDList()));
+      }
       for (BlockID block : bg.getBlockIDList()) {
         long containerID = block.getContainerID();
         if (containerBlocks.containsKey(containerID)) {
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
index aa55480..ac53f2c 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/DeletedBlockLogImpl.java
@@ -18,11 +18,12 @@
 package org.apache.hadoop.hdds.scm.block;
 
 import java.io.IOException;
-import java.util.LinkedHashSet;
 import java.util.List;
-import java.util.Map;
-import java.util.Set;
 import java.util.UUID;
+import java.util.Set;
+import java.util.Map;
+import java.util.LinkedHashSet;
+import java.util.ArrayList;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.locks.Lock;
@@ -35,8 +36,9 @@
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerBlocksDeletionACKProto.DeleteBlockTransactionResult;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
 import org.apache.hadoop.hdds.scm.command.CommandStatusReportHandler.DeleteBlockStatus;
-import org.apache.hadoop.hdds.scm.container.ContainerID;
 import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerNotFoundException;
 import org.apache.hadoop.hdds.scm.container.ContainerManager;
 import org.apache.hadoop.hdds.scm.container.ContainerReplica;
 import org.apache.hadoop.hdds.scm.metadata.SCMMetadataStore;
@@ -129,9 +131,12 @@
         DeletedBlocksTransaction block =
             scmMetadataStore.getDeletedBlocksTXTable().get(txID);
         if (block == null) {
-          // Should we make this an error ? How can we not find the deleted
-          // TXID?
-          LOG.warn("Deleted TXID {} not found.", txID);
+          if (LOG.isDebugEnabled()) {
+            // This can occur due to a race between a retried request and the
+            // old service task: the old task removes the transaction while
+            // the new task is resending it.
+            LOG.debug("Deleted TXID {} not found.", txID);
+          }
           continue;
         }
         DeletedBlocksTransaction.Builder builder = block.toBuilder();
@@ -196,9 +201,12 @@
               transactionResult.getContainerID());
           if (dnsWithCommittedTxn == null) {
             // Mostly likely it's a retried delete command response.
-            LOG.debug("Transaction txId={} commit by dnId={} for containerID={}"
-                    + " failed. Corresponding entry not found.", txID, dnID,
-                containerId);
+            if (LOG.isDebugEnabled()) {
+              LOG.debug(
+                  "Transaction txId={} commit by dnId={} for containerID={}"
+                      + " failed. Corresponding entry not found.", txID, dnID,
+                  containerId);
+            }
             continue;
           }
 
@@ -218,12 +226,16 @@
                 .collect(Collectors.toList());
             if (dnsWithCommittedTxn.containsAll(containerDns)) {
               transactionToDNsCommitMap.remove(txID);
-              LOG.debug("Purging txId={} from block deletion log", txID);
+              if (LOG.isDebugEnabled()) {
+                LOG.debug("Purging txId={} from block deletion log", txID);
+              }
               scmMetadataStore.getDeletedBlocksTXTable().delete(txID);
             }
           }
-          LOG.debug("Datanode txId={} containerId={} committed by dnId={}",
-              txID, containerId, dnID);
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Datanode txId={} containerId={} committed by dnId={}",
+                txID, containerId, dnID);
+          }
         } catch (IOException e) {
           LOG.warn("Could not commit delete block transaction: " +
               transactionResult.getTxID(), e);
@@ -354,19 +366,27 @@
           ? extends Table.KeyValue<Long, DeletedBlocksTransaction>> iter =
                scmMetadataStore.getDeletedBlocksTXTable().iterator()) {
         int numBlocksAdded = 0;
+        List<DeletedBlocksTransaction> txnsToBePurged =
+            new ArrayList<>();
         while (iter.hasNext() && numBlocksAdded < blockDeletionLimit) {
-          Table.KeyValue<Long, DeletedBlocksTransaction> keyValue =
-              iter.next();
+          Table.KeyValue<Long, DeletedBlocksTransaction> keyValue = iter.next();
           DeletedBlocksTransaction txn = keyValue.getValue();
           final ContainerID id = ContainerID.valueof(txn.getContainerID());
-          if (txn.getCount() > -1 && txn.getCount() <= maxRetry
-              && !containerManager.getContainer(id).isOpen()) {
-            numBlocksAdded += txn.getLocalIDCount();
-            getTransaction(txn, transactions);
-            transactionToDNsCommitMap
-                .putIfAbsent(txn.getTxID(), new LinkedHashSet<>());
+          try {
+            if (txn.getCount() > -1 && txn.getCount() <= maxRetry
+                && !containerManager.getContainer(id).isOpen()) {
+              numBlocksAdded += txn.getLocalIDCount();
+              getTransaction(txn, transactions);
+              transactionToDNsCommitMap
+                  .putIfAbsent(txn.getTxID(), new LinkedHashSet<>());
+            }
+          } catch (ContainerNotFoundException ex) {
+            LOG.warn("Container: " + id + " was not found for the transaction: "
+                + txn);
+            txnsToBePurged.add(txn);
           }
         }
+        purgeTransactions(txnsToBePurged);
       }
       return transactions;
     } finally {
@@ -374,6 +394,18 @@
     }
   }
 
+  public void purgeTransactions(List<DeletedBlocksTransaction> txnsToBePurged)
+      throws IOException {
+    try (BatchOperation batch = scmMetadataStore.getBatchHandler()
+        .initBatchOperation()) {
+      for (int i = 0; i < txnsToBePurged.size(); i++) {
+        scmMetadataStore.getDeletedBlocksTXTable()
+            .deleteWithBatch(batch, txnsToBePurged.get(i).getTxID());
+      }
+      scmMetadataStore.getBatchHandler().commitBatchOperation(batch);
+    }
+  }
+
   @Override
   public void onMessage(DeleteBlockStatus deleteBlockStatus,
                         EventPublisher publisher) {
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
index fbf5654..ceeaa10 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
@@ -25,12 +25,12 @@
 
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
 import org.apache.hadoop.hdds.scm.ScmConfig;
 import org.apache.hadoop.hdds.scm.container.ContainerManager;
 import org.apache.hadoop.hdds.scm.events.SCMEvents;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.server.events.EventPublisher;
 import org.apache.hadoop.hdds.utils.BackgroundService;
 import org.apache.hadoop.hdds.utils.BackgroundTask;
@@ -116,11 +116,14 @@
       long startTime = Time.monotonicNow();
       // Scan SCM DB in HB interval and collect a throttled list of
       // to delete blocks.
+
       if (LOG.isDebugEnabled()) {
         LOG.debug("Running DeletedBlockTransactionScanner");
       }
-
-      List<DatanodeDetails> datanodes = nodeManager.getNodes(NodeState.HEALTHY);
+      // TODO - DECOMM - should we be deleting blocks from decommissioning
+      //        nodes, and what about nodes entering maintenance?
+      List<DatanodeDetails> datanodes =
+          nodeManager.getNodes(NodeStatus.inServiceHealthy());
       if (datanodes != null) {
         try {
           DatanodeDeletedBlockTransactions transactions =
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReplicaCount.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReplicaCount.java
new file mode 100644
index 0000000..bf8c3b9
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReplicaCount.java
@@ -0,0 +1,271 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container;
+
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import java.util.Set;
+
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.DECOMMISSIONED;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.DECOMMISSIONING;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.IN_MAINTENANCE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.IN_SERVICE;
+
+/**
+ * Immutable object that is created with a set of ContainerReplica objects and
+ * the number of in-flight replica adds and deletes, the container replication
+ * factor and the min count which must be available for maintenance. This
+ * information can be used to determine if the container is over or under
+ * replicated and also how many additional replicas need to be created or
+ * removed.
+ */
+public class ContainerReplicaCount {
+
+  private int healthyCount = 0;
+  private int decommissionCount = 0;
+  private int maintenanceCount = 0;
+  private int inFlightAdd = 0;
+  private int inFlightDel = 0;
+  private int repFactor;
+  private int minHealthyForMaintenance;
+  private ContainerInfo container;
+  private Set<ContainerReplica> replica;
+
+  public ContainerReplicaCount(ContainerInfo container,
+                               Set<ContainerReplica> replica, int inFlightAdd,
+                               int inFlightDelete, int replicationFactor,
+                               int minHealthyForMaintenance) {
+    this.healthyCount = 0;
+    this.decommissionCount = 0;
+    this.maintenanceCount = 0;
+    this.inFlightAdd = inFlightAdd;
+    this.inFlightDel = inFlightDelete;
+    this.repFactor = replicationFactor;
+    this.replica = replica;
+    this.minHealthyForMaintenance
+        = Math.min(this.repFactor, minHealthyForMaintenance);
+    this.container = container;
+
+    for (ContainerReplica cr : this.replica) {
+      HddsProtos.NodeOperationalState state =
+          cr.getDatanodeDetails().getPersistedOpState();
+      if (state == DECOMMISSIONED || state == DECOMMISSIONING) {
+        decommissionCount++;
+      } else if (state == IN_MAINTENANCE || state == ENTERING_MAINTENANCE) {
+        maintenanceCount++;
+      } else {
+        healthyCount++;
+      }
+    }
+  }
+
+  public int getHealthyCount() {
+    return healthyCount;
+  }
+
+  public int getDecommissionCount() {
+    return decommissionCount;
+  }
+
+  public int getMaintenanceCount() {
+    return maintenanceCount;
+  }
+
+  public int getReplicationFactor() {
+    return repFactor;
+  }
+
+  public ContainerInfo getContainer() {
+    return container;
+  }
+
+  public Set<ContainerReplica> getReplica() {
+    return replica;
+  }
+
+  @Override
+  public String toString() {
+    return "Container State: " +container.getState()+
+        " Replica Count: "+replica.size()+
+        " Healthy Count: "+healthyCount+
+        " Decommission Count: "+decommissionCount+
+        " Maintenance Count: "+maintenanceCount+
+        " inFlightAdd Count: "+inFlightAdd+
+        " inFightDel Count: "+inFlightDel+
+        " ReplicationFactor: "+repFactor+
+        " minMaintenance Count: "+minHealthyForMaintenance;
+  }
+
+  /**
+   * Calculates the delta of replicas which need to be created or removed
+   * to ensure the container is correctly replicated once inflight adds and
+   * deletes are taken into account.
+   *
+   * When considering inflight operations, we assume the worst case to avoid
+   * data loss: a pending delete will succeed while a pending add will fail.
+   * In this way, we avoid scheduling too many deletes, which could result in
+   * data loss.
+   *
+   * Decisions around over-replication are made only on healthy replicas,
+   * ignoring any in maintenance and also any inflight adds. InFlight adds are
+   * ignored, as they may not complete, so if we have:
+   *
+   *     H, H, H, IN_FLIGHT_ADD
+   *
+   * And then schedule a delete, we could end up under-replicated (add fails,
+   * delete completes). It is better to let the inflight operations complete
+   * and then deal with any further over or under replication.
+   *
+   * For maintenance replicas, assuming replication factor 3, and minHealthy
+   * 2, it is possible for all 3 hosts to be put into maintenance, leaving the
+   * following (H = healthy, M = maintenance):
+   *
+   *     H, H, M, M, M
+   *
+   * Even though we are tracking 5 replicas, this is not over replicated as we
+   * ignore the maintenance copies. Later, the replicas could look like:
+   *
+   *     H, H, H, H, M
+   *
+   * At this stage, the container is over replicated by 1, so one replica can be
+   * removed.
+   *
+   * For containers which already have replicationFactor healthy replicas, we
+   * ignore any inflight adds or deletes, as they may fail. Instead, we wait
+   * for them to complete and then deal with any excess or deficit.
+   *
+   * For under replicated containers we do consider inflight adds and deletes
+   * to avoid scheduling more adds than needed. There is additional logic
+   * around containers with maintenance replicas to ensure
+   * minHealthyForMaintenance replicas are maintained.
+   *
+   * @return Delta of replicas needed. Negative indicates over replication and
+   *         replicas should be removed. Positive indicates under replication
+   *         and replicas should be added. Zero indicates the container has
+   *         replicationFactor healthy replicas.
+   */
+  public int additionalReplicaNeeded() {
+    int delta = missingReplicas();
+
+    if (delta < 0) {
+      // Over replicated, so may need to remove a container. Do not consider
+      // inFlightAdds, as they may fail, but do consider inFlightDel which
+      // will reduce the over-replication if it completes.
+      // Note this could make the delta positive if there are too many in
+      // flight deletes, which will result in an additional replica being
+      // scheduled.
+      return delta + inFlightDel;
+    } else {
+      // May be under or perfectly replicated.
+      // We must consider in flight add and delete when calculating the new
+      // containers needed, but we bound the lower limit at zero to allow
+      // inflight operations to complete before handling any potential over
+      // replication
+      return Math.max(0, delta - inFlightAdd + inFlightDel);
+    }
+  }
+
+  /**
+   * Returns the count of replicas which need to be created or removed to
+   * ensure the container is perfectly replicated. Inflight operations are not
+   * considered here, but the logic to determine the missing or excess counts
+   * for maintenance is present.
+   *
+   * Decisions around over-replication are made only on healthy replicas,
+   * ignoring any in maintenance. For example, if we have:
+   *
+   *     H, H, H, M, M
+   *
+   * This will not be considered over replicated until one of the Maintenance
+   * replicas moves to Healthy.
+   *
+   * If the container is perfectly replicated, zero will be returned.
+   *
+   * If it is under replicated a positive value will be returned, indicating
+   * how many replicas must be added.
+   *
+   * If it is over replicated a negative value will be returned, indicating how
+   * many replicas to remove.
+   *
+   * @return Zero if the container is perfectly replicated, a positive value
+   *         for under replicated and a negative value for over replicated.
+   */
+  private int missingReplicas() {
+    int delta = repFactor - healthyCount;
+
+    if (delta < 0) {
+      // Over replicated, so may need to remove a container.
+      return delta;
+    } else if (delta > 0) {
+      // May be under-replicated, depending on maintenance.
+      delta = Math.max(0, delta - maintenanceCount);
+      int neededHealthy =
+          Math.max(0, minHealthyForMaintenance - healthyCount);
+      delta = Math.max(neededHealthy, delta);
+      return delta;
+    } else { // delta == 0
+      // We have exactly the number of healthy replicas needed.
+      return delta;
+    }
+  }
+
+  /**
+   * Return true if the container is sufficiently replicated. Decommissioning
+   * and Decommissioned replicas are ignored in this check, assuming they will
+   * eventually be removed from the cluster.
+   * This check ignores inflight additions, as those replicas have not yet been
+   * created and the create could fail for some reason.
+   * The check does consider inflight deletes as there may be 3 healthy replicas
+   * now, but once the delete completes it will reduce to 2.
+   * We also assume a replica in Maintenance state cannot be removed, so the
+   * pending delete would affect only the healthy replica count.
+   *
+   * @return True if the container is sufficiently replicated and False
+   *         otherwise.
+   */
+  public boolean isSufficientlyReplicated() {
+    return missingReplicas() + inFlightDel <= 0;
+  }
+
+  /**
+   * Return true if the container is over replicated. Decommission and
+   * maintenance replicas are ignored for this check.
+   * The check ignores inflight additions, as they may fail, but it does
+   * consider inflight deletes, as they would reduce the over replication when
+   * they complete.
+   *
+   * @return True if the container is over replicated, false otherwise.
+   */
+  public boolean isOverReplicated() {
+    return missingReplicas() + inFlightDel < 0;
+  }
+
+  /**
+   * Returns true if the container is healthy, meaning the container is in
+   * QUASI_CLOSED or CLOSED state and every replica which is not on a
+   * decommissioning or maintenance node is in the same state as the container.
+   *
+   * @return true if the container is healthy, false otherwise
+   */
+  public boolean isHealthy() {
+    return (container.getState() == HddsProtos.LifeCycleState.CLOSED
+        || container.getState() == HddsProtos.LifeCycleState.QUASI_CLOSED)
+        && replica.stream()
+        .filter(r -> r.getDatanodeDetails().getPersistedOpState() == IN_SERVICE)
+        .allMatch(r -> ReplicationManager.compareState(
+            container.getState(), r.getState()));
+  }
+}
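
A worked sketch of the counting logic in this new class (illustrative values only, chosen to match the javadoc examples): with replication factor 3 and minHealthyForMaintenance 2, two healthy replicas plus one replica on an IN_MAINTENANCE node count as sufficiently replicated, while a single healthy replica would trigger one additional copy. The containerInfo and replicas variables below are assumed to describe a CLOSED container with that replica layout.

    ContainerReplicaCount counts = new ContainerReplicaCount(
        containerInfo, replicas,
        0,   // inFlightAdd
        0,   // inFlightDelete
        3,   // replicationFactor
        2);  // minHealthyForMaintenance

    counts.getHealthyCount();           // 2 (replicas on IN_SERVICE nodes)
    counts.getMaintenanceCount();       // 1
    counts.isSufficientlyReplicated();  // true: missing = max(0, (3 - 2) - 1) = 0
    counts.additionalReplicaNeeded();   // 0

    // With 1 healthy and 1 maintenance replica instead, missing =
    // max(minHealthy(2) - healthy(1), (3 - 1) - 1) = 1, so one new replica
    // would be scheduled.
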
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
index ed6924c..bde4c35 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
@@ -47,6 +47,8 @@
 import org.apache.hadoop.hdds.scm.PlacementPolicy;
 import org.apache.hadoop.hdds.scm.events.SCMEvents;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
 import org.apache.hadoop.hdds.scm.safemode.SCMSafeModeManager.SafeModeStatus;
 import org.apache.hadoop.hdds.server.events.EventHandler;
 import org.apache.hadoop.hdds.server.events.EventPublisher;
@@ -107,6 +109,11 @@
   private final LockManager<ContainerID> lockManager;
 
   /**
+   * Used to look up the health of a node or the node's operational state.
+   */
+  private final NodeManager nodeManager;
+
+  /**
    * This is used for tracking container replication commands which are issued
    * by ReplicationManager and not yet complete.
    */
@@ -136,9 +143,9 @@
   private volatile boolean running;
 
   /**
-   * Used for check datanode state.
+   * Minimum number of replicas in a healthy state for maintenance.
    */
-  private final NodeManager nodeManager;
+  private int minHealthyForMaintenance;
 
   /**
    * Constructs ReplicationManager instance with the given configuration.
@@ -158,11 +165,12 @@
     this.containerPlacement = containerPlacement;
     this.eventPublisher = eventPublisher;
     this.lockManager = lockManager;
+    this.nodeManager = nodeManager;
     this.conf = conf;
     this.running = false;
     this.inflightReplication = new ConcurrentHashMap<>();
     this.inflightDeletion = new ConcurrentHashMap<>();
-    this.nodeManager = nodeManager;
+    this.minHealthyForMaintenance = conf.getMaintenanceReplicaMinimum();
   }
 
   /**
@@ -258,7 +266,7 @@
    * @param id ContainerID
    */
   private void processContainer(ContainerID id) {
-    lockManager.lock(id);
+    lockManager.writeLock(id);
     try {
       final ContainerInfo container = containerManager.getContainer(id);
       final Set<ContainerReplica> replicas = containerManager
@@ -271,7 +279,7 @@
        * the replicas are not in OPEN state, send CLOSE_CONTAINER command.
        */
       if (state == LifeCycleState.OPEN) {
-        if (!isContainerHealthy(container, replicas)) {
+        if (!isOpenContainerHealthy(container, replicas)) {
           eventPublisher.fireEvent(SCMEvents.CLOSE_CONTAINER, id);
         }
         return;
@@ -323,6 +331,19 @@
         return;
       }
 
+      /**
+       * We don't need to take any action for a DELETED container - eventually
+       * it will be removed from SCM.
+       */
+      if (state == LifeCycleState.DELETED) {
+        return;
+      }
+
+      ContainerReplicaCount replicaSet =
+          getContainerReplicaCount(container, replicas);
+      ContainerPlacementStatus placementStatus = getPlacementStatus(
+          replicas, container.getReplicationFactor().getNumber());
+
       /*
        * We don't have to take any action if the container is healthy.
        *
@@ -330,13 +351,11 @@
        * the container is either in QUASI_CLOSED or in CLOSED state and has
        * exact number of replicas in the same state.
        */
-      if (isContainerHealthy(container, replicas)) {
+      if (isContainerEmpty(container, replicas)) {
         /*
          *  If container is empty, schedule task to delete the container.
          */
-        if (isContainerEmpty(container, replicas)) {
-          deleteContainerReplicas(container, replicas);
-        }
+        deleteContainerReplicas(container, replicas);
         return;
       }
 
@@ -344,8 +363,9 @@
        * Check if the container is under replicated and take appropriate
        * action.
        */
-      if (isContainerUnderReplicated(container, replicas)) {
-        handleUnderReplicatedContainer(container, replicas);
+      if (!replicaSet.isSufficientlyReplicated()
+          || !placementStatus.isPolicySatisfied()) {
+        handleUnderReplicatedContainer(container, replicaSet, placementStatus);
         return;
       }
 
@@ -353,24 +373,26 @@
        * Check if the container is over replicated and take appropriate
        * action.
        */
-      if (isContainerOverReplicated(container, replicas)) {
-        handleOverReplicatedContainer(container, replicas);
+      if (replicaSet.isOverReplicated()) {
+        handleOverReplicatedContainer(container, replicaSet);
         return;
       }
 
       /*
-       * The container is neither under nor over replicated and the container
-       * is not healthy. This means that the container has unhealthy/corrupted
-       * replica.
+       If we get here, the container is neither over nor under replicated, but
+       it may be "unhealthy", which means it has one or more replicas which
+       are not in the same state as the container itself.
        */
-      handleUnstableContainer(container, replicas);
+      if (!replicaSet.isHealthy()) {
+        handleUnstableContainer(container, replicas);
+      }
 
     } catch (ContainerNotFoundException ex) {
       LOG.warn("Missing container {}.", id);
     } catch (Exception ex) {
       LOG.warn("Process container {} error: ", id, ex);
     } finally {
-      lockManager.unlock(id);
+      lockManager.writeUnlock(id);
     }
   }
 
@@ -389,10 +411,22 @@
     if (inflightActions.containsKey(id)) {
       final List<InflightAction> actions = inflightActions.get(id);
 
-      actions.removeIf(action ->
-          nodeManager.getNodeState(action.datanode) != NodeState.HEALTHY);
-      actions.removeIf(action -> action.time < deadline);
-      actions.removeIf(filter);
+      Iterator<InflightAction> iter = actions.iterator();
+      while(iter.hasNext()) {
+        try {
+          InflightAction a = iter.next();
+          NodeState health = nodeManager.getNodeStatus(a.datanode)
+              .getHealth();
+          if (health != NodeState.HEALTHY || a.time < deadline
+              || filter.test(a)) {
+            iter.remove();
+          }
+        } catch (NodeNotFoundException e) {
+          // Should not happen, but if it does, just remove the action as the
+          // node somehow does not exist.
+          iter.remove();
+        }
+      }
       if (actions.isEmpty()) {
         inflightActions.remove(id);
       }
@@ -400,21 +434,23 @@
   }
 
   /**
-   * Returns true if the container is healthy according to ReplicationMonitor.
-   *
-   * According to ReplicationMonitor container is considered healthy if
-   * it has exact number of replicas in the same state as the container.
-   *
-   * @param container Container to check
-   * @param replicas Set of ContainerReplicas
-   * @return true if the container is healthy, false otherwise
+   * Returns the number of replicas which are pending creation for the given
+   * container ID.
+   * @param id The ContainerID for which to check the pending replicas
+   * @return The number of inflight additions or zero if none
    */
-  private boolean isContainerHealthy(final ContainerInfo container,
-                                     final Set<ContainerReplica> replicas) {
-    return !isContainerUnderReplicated(container, replicas) &&
-        !isContainerOverReplicated(container, replicas) &&
-        replicas.stream().allMatch(
-            r -> compareState(container.getState(), r.getState()));
+  private int getInflightAdd(final ContainerID id) {
+    return inflightReplication.getOrDefault(id, Collections.emptyList()).size();
+  }
+
+  /**
+   * Returns the number of replicas which are pending delete for the given
+   * container ID.
+   * @param id The ContainerID for which to check the pending replicas
+   * @return The number of inflight deletes or zero if none
+   */
+  private int getInflightDel(final ContainerID id) {
+    return inflightDeletion.getOrDefault(id, Collections.emptyList()).size();
   }
 
   /**
@@ -433,51 +469,61 @@
   }
 
   /**
-   * Checks if the container is under replicated or not.
-   *
-   * @param container Container to check
-   * @param replicas Set of ContainerReplicas
-   * @return true if the container is under replicated, false otherwise
+   * Given a ContainerID, look up the ContainerInfo and then return a
+   * ContainerReplicaCount object for the container.
+   * @param containerID The ID of the container
+   * @return ContainerReplicaCount for the given container
+   * @throws ContainerNotFoundException
    */
-  private boolean isContainerUnderReplicated(final ContainerInfo container,
-      final Set<ContainerReplica> replicas) {
-    if (container.getState() == LifeCycleState.DELETING ||
-        container.getState() == LifeCycleState.DELETED) {
-      return false;
+  public ContainerReplicaCount getContainerReplicaCount(ContainerID containerID)
+      throws ContainerNotFoundException {
+    ContainerInfo container = containerManager.getContainer(containerID);
+    return getContainerReplicaCount(container);
+  }
+
+  /**
+   * Given a container, obtain the set of known replicas for it, and return a
+   * ContainerReplicaCount object. This object will contain the set of replicas
+   * as well as all information required to determine if the container is over
+   * or under replicated, including the delta of replicas required to repair
+   * the over or under replication.
+   *
+   * @param container The container to create a ContainerReplicaCount for
+   * @return ContainerReplicaCount representing the replicated state of the
+   *         container.
+   * @throws ContainerNotFoundException
+   */
+  public ContainerReplicaCount getContainerReplicaCount(ContainerInfo container)
+      throws ContainerNotFoundException {
+    lockManager.readLock(container.containerID());
+    try {
+      final Set<ContainerReplica> replica = containerManager
+          .getContainerReplicas(container.containerID());
+      return getContainerReplicaCount(container, replica);
+    } finally {
+      lockManager.readUnlock(container.containerID());
     }
-    boolean misReplicated = !getPlacementStatus(
-        replicas, container.getReplicationFactor().getNumber())
-        .isPolicySatisfied();
-    return container.getReplicationFactor().getNumber() >
-        getReplicaCount(container.containerID(), replicas) || misReplicated;
   }
 
   /**
-   * Checks if the container is over replicated or not.
+   * Given a container and its set of replicas, create and return a
+   * ContainerReplicaCount representing the container.
    *
-   * @param container Container to check
-   * @param replicas Set of ContainerReplicas
-   * @return true if the container if over replicated, false otherwise
+   * @param container The container for which to construct a
+   *                  ContainerReplicaCount
+   * @param replica The set of existing replicas for this container
+   * @return ContainerReplicaCount representing the current state of the
+   *         container
    */
-  private boolean isContainerOverReplicated(final ContainerInfo container,
-      final Set<ContainerReplica> replicas) {
-    return container.getReplicationFactor().getNumber() <
-        getReplicaCount(container.containerID(), replicas);
-  }
-
-  /**
-   * Returns the replication count of the given container. This also
-   * considers inflight replication and deletion.
-   *
-   * @param id ContainerID
-   * @param replicas Set of existing replicas
-   * @return number of estimated replicas for this container
-   */
-  private int getReplicaCount(final ContainerID id,
-                              final Set<ContainerReplica> replicas) {
-    return replicas.size()
-        + inflightReplication.getOrDefault(id, Collections.emptyList()).size()
-        - inflightDeletion.getOrDefault(id, Collections.emptyList()).size();
+  private ContainerReplicaCount getContainerReplicaCount(
+      ContainerInfo container, Set<ContainerReplica> replica) {
+    return new ContainerReplicaCount(
+        container,
+        replica,
+        getInflightAdd(container.containerID()),
+        getInflightDel(container.containerID()),
+        container.getReplicationFactor().getNumber(),
+        minHealthyForMaintenance);
   }
 
   /**
@@ -601,13 +647,25 @@
    * and send replicate container command to the identified datanode(s).
    *
    * @param container ContainerInfo
-   * @param replicas Set of ContainerReplicas
+   * @param replicaSet An instance of ContainerReplicaCount, containing the
+   *                   current replica count and inflight adds and deletes
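+   * @param placementStatus The current placement status of the container's
+   *                        replicas, used to detect mis-replication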
    */
   private void handleUnderReplicatedContainer(final ContainerInfo container,
-      final Set<ContainerReplica> replicas) {
+      final ContainerReplicaCount replicaSet,
+      final ContainerPlacementStatus placementStatus) {
     LOG.debug("Handling under-replicated container: {}",
         container.getContainerID());
+    Set<ContainerReplica> replicas = replicaSet.getReplica();
     try {
+
+      if (replicaSet.isSufficientlyReplicated()
+          && placementStatus.isPolicySatisfied()) {
+        LOG.info("The container {} with replicas {} is sufficiently "+
+            "replicated and is not mis-replicated",
+            container.getContainerID(), replicaSet);
+        return;
+      }
+      int repDelta = replicaSet.additionalReplicaNeeded();
       final ContainerID id = container.containerID();
       final List<DatanodeDetails> deletionInFlight = inflightDeletion
           .getOrDefault(id, Collections.emptyList())
@@ -623,6 +681,11 @@
           .filter(r ->
               r.getState() == State.QUASI_CLOSED ||
               r.getState() == State.CLOSED)
+          // Exclude stale and dead nodes. This is particularly important for
+          // maintenance nodes, as the replicas will remain present in the
+          // container manager, even when they go dead.
+          .filter(r ->
+              getNodeStatus(r.getDatanodeDetails()).isHealthy())
           .filter(r -> !deletionInFlight.contains(r.getDatanodeDetails()))
           .sorted((r1, r2) -> r2.getSequenceId().compareTo(r1.getSequenceId()))
           .map(ContainerReplica::getDatanodeDetails)
@@ -636,13 +699,12 @@
         List<DatanodeDetails> targetReplicas = new ArrayList<>(source);
         // Then add any pending additions
         targetReplicas.addAll(replicationInFlight);
-        final ContainerPlacementStatus placementStatus =
+        final ContainerPlacementStatus inFlightPlacementStatus =
             containerPlacement.validateContainerPlacement(
                 targetReplicas, replicationFactor);
-        int delta = replicationFactor - getReplicaCount(id, replicas);
-        final int misRepDelta = placementStatus.misReplicationCount();
+        final int misRepDelta = inFlightPlacementStatus.misReplicationCount();
         final int replicasNeeded
-            = delta < misRepDelta ? misRepDelta : delta;
+            = repDelta < misRepDelta ? misRepDelta : repDelta;
         if (replicasNeeded <= 0) {
           LOG.debug("Container {} meets replication requirement with " +
               "inflight replicas", id);
@@ -656,10 +718,10 @@
         final List<DatanodeDetails> selectedDatanodes = containerPlacement
             .chooseDatanodes(excludeList, null, replicasNeeded,
                 container.getUsedBytes());
-        if (delta > 0) {
+        if (repDelta > 0) {
           LOG.info("Container {} is under replicated. Expected replica count" +
                   " is {}, but found {}.", id, replicationFactor,
-              replicationFactor - delta);
+              replicationFactor - repDelta);
         }
         int newMisRepDelta = misRepDelta;
         if (misRepDelta > 0) {
@@ -671,7 +733,7 @@
           newMisRepDelta = containerPlacement.validateContainerPlacement(
               targetReplicas, replicationFactor).misReplicationCount();
         }
-        if (delta > 0 || newMisRepDelta < misRepDelta) {
+        if (repDelta > 0 || newMisRepDelta < misRepDelta) {
           // Only create new replicas if we are missing a replicas or
           // the number of pending mis-replication has improved. No point in
           // creating new replicas for mis-replicated containers unless it
@@ -689,7 +751,7 @@
         LOG.warn("Cannot replicate container {}, no healthy replica found.",
             container.containerID());
       }
-    } catch (IOException ex) {
+    } catch (IOException | IllegalStateException ex) {
       LOG.warn("Exception while replicating container {}.",
           container.getContainerID(), ex);
     }
@@ -701,17 +763,16 @@
    * identified datanode(s).
    *
    * @param container ContainerInfo
-   * @param replicas Set of ContainerReplicas
+   * @param replicaSet An instance of ContainerReplicaCount, containing the
+   *                   current replica count and inflight adds and deletes
    */
   private void handleOverReplicatedContainer(final ContainerInfo container,
-      final Set<ContainerReplica> replicas) {
+      final ContainerReplicaCount replicaSet) {
 
+    final Set<ContainerReplica> replicas = replicaSet.getReplica();
     final ContainerID id = container.containerID();
     final int replicationFactor = container.getReplicationFactor().getNumber();
-    // Don't consider inflight replication while calculating excess here.
-    int excess = replicas.size() - replicationFactor -
-        inflightDeletion.getOrDefault(id, Collections.emptyList()).size();
-
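+    // additionalReplicaNeeded() returns a negative value when the container
+    // has too many replicas, so negate it to get the excess count.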
+    int excess = replicaSet.additionalReplicaNeeded() * -1;
     if (excess > 0) {
 
       LOG.info("Container {} is over replicated. Expected replica count" +
@@ -729,9 +790,14 @@
             .forEach(r -> uniqueReplicas
                 .putIfAbsent(r.getOriginDatanodeId(), r));
 
-        // Retain one healthy replica per origin node Id.
         eligibleReplicas.removeAll(uniqueReplicas.values());
       }
+      // Replicas which are in maintenance or decommissioned are not eligible
+      // to be removed, as they do not count toward over-replication and they
+      // may also not be available.
+      eligibleReplicas.removeIf(r ->
+          r.getDatanodeDetails().getPersistedOpState() !=
+              HddsProtos.NodeOperationalState.IN_SERVICE);
 
       final List<ContainerReplica> unhealthyReplicas = eligibleReplicas
           .stream()
@@ -757,18 +823,18 @@
       // make the container become mis-replicated.
       if (excess > 0) {
         eligibleReplicas.removeAll(unhealthyReplicas);
-        Set<ContainerReplica> replicaSet = new HashSet<>(eligibleReplicas);
+        Set<ContainerReplica> eligibleSet = new HashSet<>(eligibleReplicas);
         ContainerPlacementStatus ps =
-            getPlacementStatus(replicaSet, replicationFactor);
+            getPlacementStatus(eligibleSet, replicationFactor);
         for (ContainerReplica r : eligibleReplicas) {
           if (excess <= 0) {
             break;
           }
           // First remove the replica we are working on from the set, and then
           // check if the set is now mis-replicated.
-          replicaSet.remove(r);
+          eligibleSet.remove(r);
           ContainerPlacementStatus nowPS =
-              getPlacementStatus(replicaSet, replicationFactor);
+              getPlacementStatus(eligibleSet, replicationFactor);
           if ((!ps.isPolicySatisfied()
                 && nowPS.actualPlacementCount() == ps.actualPlacementCount())
               || (ps.isPolicySatisfied() && nowPS.isPolicySatisfied())) {
@@ -780,7 +846,7 @@
             continue;
           }
           // If we decided not to remove this replica, put it back into the set
-          replicaSet.add(r);
+          eligibleSet.add(r);
         }
         if (excess > 0) {
           LOG.info("The container {} is over replicated with {} excess " +
@@ -957,13 +1023,27 @@
   }
 
   /**
+   * Wrap the call to nodeManager.getNodeStatus, catching any
+   * NodeNotFoundException and instead throwing an IllegalStateException.
+   * @param dn The datanodeDetails to obtain the NodeStatus for
+   * @return NodeStatus corresponding to the given Datanode.
+   */
+  private NodeStatus getNodeStatus(DatanodeDetails dn) {
+    try {
+      return nodeManager.getNodeStatus(dn);
+    } catch (NodeNotFoundException e) {
+      throw new IllegalStateException("Unable to find NodeStatus for "+dn, e);
+    }
+  }
+
+  /**
    * Compares the container state with the replica state.
    *
    * @param containerState ContainerState
    * @param replicaState ReplicaState
    * @return true if the state matches, false otherwise
    */
-  private static boolean compareState(final LifeCycleState containerState,
+  public static boolean compareState(final LifeCycleState containerState,
                                       final State replicaState) {
     switch (containerState) {
     case OPEN:
@@ -983,6 +1063,20 @@
     }
   }
 
+  /**
+   * An open container is healthy if all its replicas are in the same state as
+   * the container.
+   * @param container The container to check
+   * @param replicas The replicas belonging to the container
+   * @return True if the container is healthy, false otherwise
+   */
+  private boolean isOpenContainerHealthy(
+      ContainerInfo container, Set<ContainerReplica> replicas) {
+    LifeCycleState state = container.getState();
+    return replicas.stream()
+        .allMatch(r -> ReplicationManager.compareState(state, r.getState()));
+  }
+
   @Override
   public void getMetrics(MetricsCollector collector, boolean all) {
     collector.addRecord(ReplicationManager.class.getSimpleName())
@@ -1047,7 +1141,6 @@
             + "sent  to datanodes. After this timeout the command will be "
             + "retried.")
     private long eventTimeout = Duration.ofMinutes(30).toMillis();
-
     public void setInterval(Duration interval) {
       this.interval = interval.toMillis();
     }
@@ -1056,6 +1149,25 @@
       this.eventTimeout = timeout.toMillis();
     }
 
+    /**
+     * The number of container replica which must be available for a node to
+     * enter maintenance.
+     */
+    @Config(key = "maintenance.replica.minimum",
+        type = ConfigType.INT,
+        defaultValue = "2",
+        tags = {SCM, OZONE},
+        description = "The minimum number of container replicas which must " +
+            " be available for a node to enter maintenance. If putting a " +
+            " node into maintenance reduces the available replicas for any " +
+            " container below this level, the node will remain in the " +
+            " entering maintenance state until a new replica is created.")
+    private int maintenanceReplicaMinimum = 2;
+
+    public void setMaintenanceReplicaMinimum(int replicaCount) {
+      this.maintenanceReplicaMinimum = replicaCount;
+    }
+
     public long getInterval() {
       return interval;
     }
@@ -1063,6 +1175,10 @@
     public long getEventTimeout() {
       return eventTimeout;
     }
+
+    public int getMaintenanceReplicaMinimum() {
+      return maintenanceReplicaMinimum;
+    }
   }
 
   /**
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
index 19a5ab2..f8ffc02 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
@@ -143,7 +143,6 @@
   }
 
   @VisibleForTesting
-  // TODO: remove this later.
   public ContainerStateManager getContainerStateManager() {
     return containerStateManager;
   }
@@ -407,18 +406,27 @@
         ContainerID containerIdObject = new ContainerID(containerID);
         ContainerInfo containerInfo =
             containerStore.get(containerIdObject);
-        ContainerInfo containerInfoInMem = containerStateManager
-            .getContainer(containerIdObject);
-        if (containerInfo == null || containerInfoInMem == null) {
-          throw new SCMException("Failed to increment number of deleted " +
-              "blocks for container " + containerID + ", reason : " +
-              "container doesn't exist.", FAILED_TO_FIND_CONTAINER);
+        try {
+          ContainerInfo containerInfoInMem = containerStateManager
+              .getContainer(containerIdObject);
+          if (containerInfo == null || containerInfoInMem == null) {
+            throw new SCMException("Failed to increment number of deleted " +
+                "blocks for container " + containerID + ", reason : " +
+                "container doesn't exist.", FAILED_TO_FIND_CONTAINER);
+          }
+          containerInfo.updateDeleteTransactionId(entry.getValue());
+          containerInfo.setNumberOfKeys(containerInfoInMem.getNumberOfKeys());
+          containerInfo.setUsedBytes(containerInfoInMem.getUsedBytes());
+          containerStore.putWithBatch(batchOperation, containerIdObject,
+              containerInfo);
+        } catch (ContainerNotFoundException ex) {
+          // Container is not present therefore we don't need to update
+          // transaction id for this container.
+          LOG.warn(
+              "Failed to update the transaction Id as container: " + containerID
+                  + " for transaction: " + entry.getValue()
+                  + " does not exists");
         }
-        containerInfo.updateDeleteTransactionId(entry.getValue());
-        containerInfo.setNumberOfKeys(containerInfoInMem.getNumberOfKeys());
-        containerInfo.setUsedBytes(containerInfoInMem.getUsedBytes());
-        containerStore.putWithBatch(batchOperation, containerIdObject,
-            containerInfo);
       }
       batchHandler.commitBatchOperation(batchOperation);
       containerStateManager.updateDeleteTransactionId(deleteTransactionMap);
@@ -629,4 +637,9 @@
   public Lock getLock() {
     return lock;
   }
+
+  @VisibleForTesting
+  public Table<ContainerID, ContainerInfo> getContainerStore() {
+    return this.containerStore;
+  }
 }
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
index 8cef966..bafab56 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
@@ -538,7 +538,7 @@
   private void checkIfContainerExist(ContainerID containerID)
       throws ContainerNotFoundException {
     if (!containerMap.containsKey(containerID)) {
-      throw new ContainerNotFoundException("Container with id #" +
+      throw new ContainerNotFoundException("Container with id " +
           containerID.getId() + " not found.");
     }
   }
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
index d7caffe..ac2850a 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
@@ -162,6 +162,12 @@
       new TypedEvent<>(DatanodeDetails.class, "Dead_Node");
 
   /**
+   * This event will be triggered whenever a datanode is moved into maintenance.
+   */
+  public static final TypedEvent<DatanodeDetails> START_ADMIN_ON_NODE =
+      new TypedEvent<>(DatanodeDetails.class, "START_ADMIN_ON_NODE");
+
+  /**
    * This event will be triggered whenever a datanode is moved from non-healthy
    * state to healthy state.
    */
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/X509CertificateCodec.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/X509CertificateCodec.java
index 8c30a43..9bfa7d6 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/X509CertificateCodec.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/X509CertificateCodec.java
@@ -20,7 +20,7 @@
 package org.apache.hadoop.hdds.scm.metadata;
 
 import java.io.IOException;
-import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 import java.security.cert.CertificateException;
 import java.security.cert.X509Certificate;
 import org.apache.hadoop.hdds.security.exception.SCMSecurityException;
@@ -35,7 +35,7 @@
   public byte[] toPersistedFormat(X509Certificate object) throws IOException {
     try {
       return CertificateCodec.getPEMEncodedString(object)
-          .getBytes(Charset.forName("UTF-8"));
+          .getBytes(StandardCharsets.UTF_8);
     } catch (SCMSecurityException exp) {
       throw new IOException(exp);
     }
@@ -45,7 +45,7 @@
   public X509Certificate fromPersistedFormat(byte[] rawData)
       throws IOException {
     try{
-      String s = new String(rawData, Charset.forName("UTF-8"));
+      String s = new String(rawData, StandardCharsets.UTF_8);
       return CertificateCodec.getX509Certificate(s);
     } catch (CertificateException exp) {
       throw new IOException(exp);
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitor.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitor.java
new file mode 100644
index 0000000..3466547
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitor.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+
+import java.util.Set;
+
+/**
+ * Interface used by the DatanodeAdminMonitor, which can be used to
+ * decommission or recommission nodes and take them in and out of maintenance.
+ */
+public interface DatanodeAdminMonitor extends Runnable {
+
+  void startMonitoring(DatanodeDetails dn);
+  void stopMonitoring(DatanodeDetails dn);
+  Set<DatanodeDetails> getTrackedNodes();
+
+}
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitorImpl.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitorImpl.java
new file mode 100644
index 0000000..247a307
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitorImpl.java
@@ -0,0 +1,371 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerNotFoundException;
+import org.apache.hadoop.hdds.scm.container.ContainerReplicaCount;
+import org.apache.hadoop.hdds.scm.container.ReplicationManager;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayDeque;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Queue;
+import java.util.Set;
+
+/**
+ * Monitor thread which watches for nodes to be decommissioned, recommissioned
+ * or placed into maintenance. Newly added nodes are queued in pendingNodes
+ * and recommissioned nodes are queued in cancelledNodes. On each monitor
+ * 'tick', the cancelled nodes are processed and removed from the monitor.
+ * Then any pending nodes are added to the trackedNodes set, where they stay
+ * until decommission or maintenance has ended.
+ * <p>
+ * Once a node is placed into the trackedNodes set, it goes through a workflow
+ * where the following happens:
+ * <p>
+ * 1. First an event is fired to close any pipelines on the node, which will
+ * also close any containers.
+ * 2. Next the containers on the node are obtained and checked to see if new
+ * replicas are needed. If so, the new replicas are scheduled.
+ * 3. After scheduling replication, the node remains pending until replication
+ * has completed.
+ * 4. At this stage the node will complete decommission or enter maintenance.
+ * 5. Maintenance nodes will remain tracked by this monitor until maintenance
+ * is manually ended, or the maintenance window expires.
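+ * <p>
+ * A minimal usage sketch (illustrative only; the executor, configuration and
+ * manager objects are assumed to be provided by the SCM wiring):
+ * <pre>
+ *   DatanodeAdminMonitor monitor = new DatanodeAdminMonitorImpl(
+ *       conf, eventQueue, nodeManager, replicationManager);
+ *   monitor.startMonitoring(datanodeDetails);
+ *   executor.scheduleAtFixedRate(monitor, 30, 30, TimeUnit.SECONDS);
+ * </pre>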
+ */
+public class DatanodeAdminMonitorImpl implements DatanodeAdminMonitor {
+
+  private OzoneConfiguration conf;
+  private EventPublisher eventQueue;
+  private NodeManager nodeManager;
+  private ReplicationManager replicationManager;
+  private Queue<DatanodeDetails> pendingNodes = new ArrayDeque<>();
+  private Queue<DatanodeDetails> cancelledNodes = new ArrayDeque<>();
+  private Set<DatanodeDetails> trackedNodes = new HashSet<>();
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(DatanodeAdminMonitorImpl.class);
+
+  public DatanodeAdminMonitorImpl(
+      OzoneConfiguration conf,
+      EventPublisher eventQueue,
+      NodeManager nodeManager,
+      ReplicationManager replicationManager) {
+    this.conf = conf;
+    this.eventQueue = eventQueue;
+    this.nodeManager = nodeManager;
+    this.replicationManager = replicationManager;
+  }
+
+  /**
+   * Add a node to the decommission or maintenance workflow. The node will be
+   * queued and added to the workflow after a defined interval.
+   *
+   * @param dn         The datanode to move into an admin state
+   */
+  @Override
+  public synchronized void startMonitoring(DatanodeDetails dn) {
+    cancelledNodes.remove(dn);
+    pendingNodes.add(dn);
+  }
+
+  /**
+   * Remove a node from the decommission or maintenance workflow, and return it
+   * to service. The node will be queued and removed from decommission or
+   * maintenance after a defined interval.
+   *
+   * @param dn The datanode for which to stop decommission or maintenance.
+   */
+  @Override
+  public synchronized void stopMonitoring(DatanodeDetails dn) {
+    pendingNodes.remove(dn);
+    cancelledNodes.add(dn);
+  }
+
+  /**
+   * Get the set of nodes which are currently tracked in the decommission
+   * and maintenance workflow.
+   * @return An unmodifiable set of the tracked nodes.
+   */
+  @Override
+  public synchronized Set<DatanodeDetails> getTrackedNodes() {
+    return Collections.unmodifiableSet(trackedNodes);
+  }
+
+  /**
+   * Run an iteration of the monitor. This is the main run loop, and performs
+   * the following checks:
+   * <p>
+   * 1. Check for any cancelled nodes and process them
+   * 2. Check for any newly added nodes and add them to the workflow
+   * 3. Perform checks on the transitioning nodes and move them through the
+   * workflow until they have completed decommission or maintenance
+   */
+  @Override
+  public void run() {
+    try {
+      synchronized (this) {
+        processCancelledNodes();
+        processPendingNodes();
+      }
+      processTransitioningNodes();
+      if (trackedNodes.size() > 0 || pendingNodes.size() > 0) {
+        LOG.info("There are {} nodes tracked for decommission and " +
+                "maintenance. {} pending nodes.",
+            trackedNodes.size(), pendingNodes.size());
+      }
+    } catch (Exception e) {
+      LOG.error("Caught an error in the DatanodeAdminMonitor", e);
+      // Intentionally do not re-throw, as if we do the monitor thread
+      // will not get rescheduled.
+    }
+  }
+
+  public int getPendingCount() {
+    return pendingNodes.size();
+  }
+
+  public int getCancelledCount() {
+    return cancelledNodes.size();
+  }
+
+  public int getTrackedNodeCount() {
+    return trackedNodes.size();
+  }
+
+  private void processCancelledNodes() {
+    while (!cancelledNodes.isEmpty()) {
+      DatanodeDetails dn = cancelledNodes.poll();
+      try {
+        stopTrackingNode(dn);
+        putNodeBackInService(dn);
+        LOG.info("Recommissioned node {}", dn);
+      } catch (NodeNotFoundException e) {
+        LOG.warn("Failed processing the cancel admin request for {}", dn, e);
+      }
+    }
+  }
+
+  private void processPendingNodes() {
+    while (!pendingNodes.isEmpty()) {
+      startTrackingNode(pendingNodes.poll());
+    }
+  }
+
+  private void processTransitioningNodes() {
+    Iterator<DatanodeDetails> iterator = trackedNodes.iterator();
+    while (iterator.hasNext()) {
+      DatanodeDetails dn = iterator.next();
+      try {
+        NodeStatus status = getNodeStatus(dn);
+
+        if (!shouldContinueWorkflow(dn, status)) {
+          abortWorkflow(dn);
+          iterator.remove();
+          continue;
+        }
+
+        if (status.isMaintenance()) {
+          if (status.operationalStateExpired()) {
+            completeMaintenance(dn);
+            iterator.remove();
+            continue;
+          }
+        }
+
+        if (status.isDecommissioning() || status.isEnteringMaintenance()) {
+          if (checkPipelinesClosedOnNode(dn)
+              // Ensure the DN has received and persisted the current
+              // operational state.
+              && status.getOperationalState()
+                  == dn.getPersistedOpState()
+              && checkContainersReplicatedOnNode(dn)) {
+            // checkContainersReplicatedOnNode may take a short time to run,
+            // so after it completes, re-fetch the NodeStatus to check the
+            // health and ensure the state is still good to continue.
+            status = getNodeStatus(dn);
+            if (status.isDead()) {
+              LOG.warn("Datanode {} is dead and the admin workflow cannot " +
+                  "continue. The node will be put back to IN_SERVICE and " +
+                  "handled as a dead node", dn);
+              putNodeBackInService(dn);
+              iterator.remove();
+            } else if (status.isDecommissioning()) {
+              completeDecommission(dn);
+              iterator.remove();
+            } else if (status.isEnteringMaintenance()) {
+              putIntoMaintenance(dn);
+            }
+          }
+        }
+
+      } catch (NodeNotFoundException e) {
+        LOG.error("An unexpected error occurred processing datanode {}. " +
+            "Aborting the admin workflow", dn, e);
+        abortWorkflow(dn);
+        iterator.remove();
+      }
+    }
+  }
+
+  /**
+   * Checks if a node is in an unexpected state or has gone dead while
+   * decommissioning or entering maintenance. If the node is not in a valid
+   * state to continue the admin workflow, return false, otherwise return true.
+   *
+   * @param dn         The Datanode for which to check the current state
+   * @param nodeStatus The current NodeStatus for the datanode
+   * @return True if admin can continue, false otherwise
+   */
+  private boolean shouldContinueWorkflow(DatanodeDetails dn,
+      NodeStatus nodeStatus) {
+    if (!nodeStatus.isDecommission() && !nodeStatus.isMaintenance()) {
+      LOG.warn("Datanode {} has an operational state of {} when it should " +
+              "be undergoing decommission or maintenance. Aborting admin for " +
+              "this node.", dn, nodeStatus.getOperationalState());
+      return false;
+    }
+    if (nodeStatus.isDead() && !nodeStatus.isInMaintenance()) {
+      LOG.error("Datanode {} is dead but is not IN_MAINTENANCE. Aborting the " +
+          "admin workflow for this node", dn);
+      return false;
+    }
+    return true;
+  }
+
+  private boolean checkPipelinesClosedOnNode(DatanodeDetails dn)
+      throws NodeNotFoundException {
+    Set<PipelineID> pipelines = nodeManager.getPipelines(dn);
+    NodeStatus status = nodeManager.getNodeStatus(dn);
+    if (pipelines == null || pipelines.size() == 0
+        || status.operationalStateExpired()) {
+      return true;
+    } else {
+      LOG.info("Waiting for pipelines to close for {}. There are {} " +
+          "pipelines", dn, pipelines.size());
+      return false;
+    }
+  }
+
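+  // Return true only when every container with a replica on the given node
+  // is sufficiently replicated and healthy; until then the node cannot
+  // complete decommission or enter maintenance.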
+  private boolean checkContainersReplicatedOnNode(DatanodeDetails dn)
+      throws NodeNotFoundException {
+    int sufficientlyReplicated = 0;
+    int underReplicated = 0;
+    int unhealthy = 0;
+    Set<ContainerID> containers =
+        nodeManager.getContainers(dn);
+    for (ContainerID cid : containers) {
+      try {
+        ContainerReplicaCount replicaSet =
+            replicationManager.getContainerReplicaCount(cid);
+        if (replicaSet.isSufficientlyReplicated()) {
+          sufficientlyReplicated++;
+        } else {
+          underReplicated++;
+        }
+        if (!replicaSet.isHealthy()) {
+          unhealthy++;
+        }
+      } catch (ContainerNotFoundException e) {
+        LOG.warn("ContainerID {} present in node list for {} but not found " +
+            "in containerManager", cid, dn);
+      }
+    }
+    LOG.info("{} has {} sufficientlyReplicated, {} underReplicated and {} " +
+        "unhealthy containers",
+        dn, sufficientlyReplicated, underReplicated, unhealthy);
+    return underReplicated == 0 && unhealthy == 0;
+  }
+
+  private void completeDecommission(DatanodeDetails dn)
+      throws NodeNotFoundException {
+    setNodeOpState(dn, NodeOperationalState.DECOMMISSIONED);
+    LOG.info("Datanode {} has completed the admin workflow. The operational " +
+            "state has been set to {}", dn,
+        NodeOperationalState.DECOMMISSIONED);
+  }
+
+  private void putIntoMaintenance(DatanodeDetails dn)
+      throws NodeNotFoundException {
+    LOG.info("Datanode {} has entered maintenance", dn);
+    setNodeOpState(dn, NodeOperationalState.IN_MAINTENANCE);
+  }
+
+  private void completeMaintenance(DatanodeDetails dn)
+      throws NodeNotFoundException {
+    // The end state of Maintenance is to put the node back IN_SERVICE, whether
+    // it is dead or not.
+    LOG.info("Datanode {} has ended maintenance automatically", dn);
+    putNodeBackInService(dn);
+  }
+
+  private void startTrackingNode(DatanodeDetails dn) {
+    eventQueue.fireEvent(SCMEvents.START_ADMIN_ON_NODE, dn);
+    trackedNodes.add(dn);
+  }
+
+  private void stopTrackingNode(DatanodeDetails dn) {
+    trackedNodes.remove(dn);
+  }
+
+  /**
+   * If we encounter an unexpected condition in the admin workflow, we must
+   * abort it by setting the node operationalState back to IN_SERVICE and then
+   * removing the node from tracking.
+   *
+   * @param dn The datanode for which to abort tracking
+   */
+  private void abortWorkflow(DatanodeDetails dn) {
+    try {
+      putNodeBackInService(dn);
+    } catch (NodeNotFoundException e) {
+      LOG.error("Unable to set the node OperationalState for {} while " +
+          "aborting the datanode admin workflow", dn);
+    }
+  }
+
+  private void putNodeBackInService(DatanodeDetails dn)
+      throws NodeNotFoundException {
+    setNodeOpState(dn, NodeOperationalState.IN_SERVICE);
+  }
+
+  private void setNodeOpState(DatanodeDetails dn,
+      HddsProtos.NodeOperationalState state) throws NodeNotFoundException {
+    long expiry = 0;
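+    // When moving into a maintenance state, carry over any expiry time already
+    // recorded for the node; all other transitions leave the expiry at zero.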
+    if ((state == NodeOperationalState.IN_MAINTENANCE)
+        || (state == NodeOperationalState.ENTERING_MAINTENANCE)) {
+      NodeStatus status = nodeManager.getNodeStatus(dn);
+      expiry = status.getOpStateExpiryEpochSeconds();
+    }
+    nodeManager.setNodeOperationalState(dn, state, expiry);
+  }
+
+  private NodeStatus getNodeStatus(DatanodeDetails dnd)
+      throws NodeNotFoundException {
+    return nodeManager.getNodeStatus(dnd);
+  }
+
+}
\ No newline at end of file
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
index 92ae43b..d80f3f1 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.hdds.scm.node;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.LayoutVersionProto;
@@ -47,14 +48,17 @@
   private List<MetadataStorageReportProto> metadataStorageReports;
   private LayoutVersionProto lastKnownLayoutVersion;
 
+  private NodeStatus nodeStatus;
+
   /**
    * Constructs DatanodeInfo from DatanodeDetails.
    *
    * @param datanodeDetails Details about the datanode
+   * @param nodeStatus Node Status
    * @param layoutInfo Details about the LayoutVersionProto
    */
-  public DatanodeInfo(DatanodeDetails datanodeDetails,
-                      LayoutVersionProto layoutInfo) {
+  public DatanodeInfo(DatanodeDetails datanodeDetails, NodeStatus nodeStatus,
+                        LayoutVersionProto layoutInfo) {
     super(datanodeDetails);
     this.lock = new ReentrantReadWriteLock();
     this.lastHeartbeatTime = Time.monotonicNow();
@@ -66,6 +70,7 @@
                 layoutInfo.getSoftwareLayoutVersion() : 0)
             .build();
     this.storageReports = Collections.emptyList();
+    this.nodeStatus = nodeStatus;
     this.metadataStorageReports = Collections.emptyList();
   }
 
@@ -73,9 +78,20 @@
    * Updates the last heartbeat time with current time.
    */
   public void updateLastHeartbeatTime() {
+    updateLastHeartbeatTime(Time.monotonicNow());
+  }
+
+  /**
+   * Sets the last heartbeat time to a given value. Intended to be used
+   * only for tests.
+   *
+   * @param milliSecondsSinceEpoch - ms since Epoch to set as the heartbeat time
+   */
+  @VisibleForTesting
+  public void updateLastHeartbeatTime(long milliSecondsSinceEpoch) {
     try {
       lock.writeLock().lock();
-      lastHeartbeatTime = Time.monotonicNow();
+      lastHeartbeatTime = milliSecondsSinceEpoch;
     } finally {
       lock.writeLock().unlock();
     }
@@ -215,6 +231,37 @@
     return lastStatsUpdatedTime;
   }
 
+  /**
+   * Return the current NodeStatus for the datanode.
+   *
+   * @return NodeStatus - the current nodeStatus
+   */
+  public NodeStatus getNodeStatus() {
+    try {
+      lock.readLock().lock();
+      return nodeStatus;
+    } finally {
+      lock.readLock().unlock();
+    }
+  }
+
+  /**
+   * Update the NodeStatus for this datanode. When using this method
+   * be aware of the potential for lost updates if two threads read the
+   * current status, update one field and then write it back without
+   * locking enforced outside of this class.
+   *
+   * @param newNodeStatus - the new NodeStatus object
+   */
+  public void setNodeStatus(NodeStatus newNodeStatus) {
+    try {
+      lock.writeLock().lock();
+      this.nodeStatus = newNodeStatus;
+    } finally {
+      lock.writeLock().unlock();
+    }
+  }
+
   @Override
   public int hashCode() {
     return super.hashCode();
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
index 6a56fc3..b4fc28a 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
@@ -78,8 +78,11 @@
       destroyPipelines(datanodeDetails);
       closeContainers(datanodeDetails, publisher);
 
-      // Remove the container replicas associated with the dead node.
-      removeContainerReplicas(datanodeDetails);
+      // Remove the container replicas associated with the dead node unless it
+      // is IN_MAINTENANCE
+      if (!nodeManager.getNodeStatus(datanodeDetails).isInMaintenance()) {
+        removeContainerReplicas(datanodeDetails);
+      }
 
     } catch (NodeNotFoundException ex) {
       // This should not happen, we cannot get a dead node event for an
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/InvalidHostStringException.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/InvalidHostStringException.java
new file mode 100644
index 0000000..c4046c1
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/InvalidHostStringException.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import java.io.IOException;
+
+/**
+ * Exception thrown by the NodeDecommissionManager when it encounters
+ * host strings it does not expect or understand.
+ */
+
+public class InvalidHostStringException extends IOException {
+  public InvalidHostStringException(String msg) {
+    super(msg);
+  }
+
+  public InvalidHostStringException(String msg, Exception e) {
+    super(msg, e);
+  }
+}
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/InvalidNodeStateException.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/InvalidNodeStateException.java
new file mode 100644
index 0000000..9c82398
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/InvalidNodeStateException.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import java.io.IOException;
+
+/**
+ * Exception thrown by the NodeDecommissionManager when it encounters
+ * host strings it does not expect or understand.
+ */
+
+public class InvalidNodeStateException extends IOException {
+  public InvalidNodeStateException(String msg) {
+    super(msg);
+  }
+
+  public InvalidNodeStateException(String msg, Exception e) {
+    super(msg, e);
+  }
+}
\ No newline at end of file
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
index a40a63a..f0f9b72 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
@@ -20,9 +20,13 @@
 
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
 import org.apache.hadoop.hdds.server.events.EventHandler;
 import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Handles New Node event.
@@ -30,11 +34,16 @@
 public class NewNodeHandler implements EventHandler<DatanodeDetails> {
 
   private final PipelineManager pipelineManager;
+  private final NodeDecommissionManager decommissionManager;
   private final ConfigurationSource conf;
+  private static final Logger LOG =
+      LoggerFactory.getLogger(NewNodeHandler.class);
 
   public NewNodeHandler(PipelineManager pipelineManager,
+      NodeDecommissionManager decommissionManager,
       ConfigurationSource conf) {
     this.pipelineManager = pipelineManager;
+    this.decommissionManager = decommissionManager;
     this.conf = conf;
   }
 
@@ -42,5 +51,16 @@
   public void onMessage(DatanodeDetails datanodeDetails,
       EventPublisher publisher) {
     pipelineManager.triggerPipelineCreation();
+    if (datanodeDetails.getPersistedOpState()
+        != HddsProtos.NodeOperationalState.IN_SERVICE) {
+      try {
+        decommissionManager.continueAdminForNode(datanodeDetails);
+      } catch (NodeNotFoundException e) {
+        // Should not happen, as the node has just registered to call this event
+        // handler.
+        LOG.warn("NodeNotFound when adding the node to the decommissionManager",
+            e);
+      }
+    }
   }
 }
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeDecommissionManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeDecommissionManager.java
new file mode 100644
index 0000000..30cae10
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeDecommissionManager.java
@@ -0,0 +1,369 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
+import org.apache.hadoop.hdds.scm.container.ReplicationManager;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.net.InetAddress;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.UnknownHostException;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Class used to manage datanodes scheduled for maintenance or decommission.
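+ * <p>
+ * A rough usage sketch (the host strings below are examples only):
+ * <pre>
+ *   decommissionManager.decommissionNodes(Arrays.asList("dn1.example.com"));
+ *   decommissionManager.startMaintenanceNodes(
+ *       Arrays.asList("dn2.example.com:9858"), 24);
+ *   decommissionManager.recommissionNodes(Arrays.asList("dn1.example.com"));
+ * </pre>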
+ */
+public class NodeDecommissionManager {
+
+  private ScheduledExecutorService executor;
+  private DatanodeAdminMonitor monitor;
+
+  private NodeManager nodeManager;
+  //private ContainerManager containerManager;
+  private EventPublisher eventQueue;
+  private ReplicationManager replicationManager;
+  private OzoneConfiguration conf;
+  private boolean useHostnames;
+  private long monitorInterval;
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(NodeDecommissionManager.class);
+
+  static class HostDefinition {
+    private String rawHostname;
+    private String hostname;
+    private int port;
+
+    HostDefinition(String hostname) throws InvalidHostStringException {
+      this.rawHostname = hostname;
+      parseHostname();
+    }
+
+    public String getRawHostname() {
+      return rawHostname;
+    }
+
+    public String getHostname() {
+      return hostname;
+    }
+
+    public int getPort() {
+      return port;
+    }
+
+    private void parseHostname() throws InvalidHostStringException {
+      try {
+        // A URI *must* have a scheme, so just create a fake one
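+        // e.g. "host1:9858" becomes "empty://host1:9858", which yields
+        // hostname "host1" and port 9858; a bare "host1" yields port -1.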
+        URI uri = new URI("empty://"+rawHostname.trim());
+        this.hostname = uri.getHost();
+        this.port = uri.getPort();
+
+        if (this.hostname == null) {
+          throw new InvalidHostStringException("The string "+rawHostname+
+              " does not contain a value hostname or hostname:port definition");
+        }
+      } catch (URISyntaxException e) {
+        throw new InvalidHostStringException(
+            "Unable to parse the hoststring "+rawHostname, e);
+      }
+    }
+  }
+
+  private List<DatanodeDetails> mapHostnamesToDatanodes(List<String> hosts)
+      throws InvalidHostStringException {
+    List<DatanodeDetails> results = new LinkedList<>();
+    for (String hostString : hosts) {
+      HostDefinition host = new HostDefinition(hostString);
+      InetAddress addr;
+      try {
+        addr = InetAddress.getByName(host.getHostname());
+      } catch (UnknownHostException e) {
+        throw new InvalidHostStringException("Unable to resolve the host "
+            +host.getRawHostname(), e);
+      }
+      String dnsName;
+      if (useHostnames) {
+        dnsName = addr.getHostName();
+      } else {
+        dnsName = addr.getHostAddress();
+      }
+      List<DatanodeDetails> found = nodeManager.getNodesByAddress(dnsName);
+      if (found.size() == 0) {
+        throw new InvalidHostStringException("The string " +
+            host.getRawHostname()+" resolved to "+dnsName +
+            " is not found in SCM");
+      } else if (found.size() == 1) {
+        if (host.getPort() != -1 &&
+            !validateDNPortMatch(host.getPort(), found.get(0))) {
+          throw new InvalidHostStringException("The string "+
+              host.getRawHostname()+" matched a single datanode, but the "+
+              "given port is not used by that Datanode");
+        }
+        results.add(found.get(0));
+      } else if (found.size() > 1) {
+        DatanodeDetails match = null;
+        for(DatanodeDetails dn : found) {
+          if (validateDNPortMatch(host.getPort(), dn)) {
+            match = dn;
+            break;
+          }
+        }
+        if (match == null) {
+          throw new InvalidHostStringException("The string " +
+              host.getRawHostname()+ "matched multiple Datanodes, but no "+
+              "datanode port matched the given port");
+        }
+        results.add(match);
+      }
+    }
+    return results;
+  }
+
+  /**
+   * Check if the passed port is used by the given DatanodeDetails object. If
+   * it is, return true, otherwise return false.
+   * @param port Port number to check if it is used by the datanode
+   * @param dn Datanode to check if it is using the given port
+   * @return True if port is used by the datanode. False otherwise.
+   */
+  private boolean validateDNPortMatch(int port, DatanodeDetails dn) {
+    for (DatanodeDetails.Port p : dn.getPorts()) {
+      if (p.getValue() == port) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  public NodeDecommissionManager(OzoneConfiguration config, NodeManager nm,
+      ContainerManager containerManager,
+      EventPublisher eventQueue, ReplicationManager rm) {
+    this.nodeManager = nm;
+    conf = config;
+    //this.containerManager = containerManager;
+    this.eventQueue = eventQueue;
+    this.replicationManager = rm;
+
+    executor = Executors.newScheduledThreadPool(1,
+        new ThreadFactoryBuilder().setNameFormat("DatanodeAdminManager-%d")
+            .setDaemon(true).build());
+
+    useHostnames = conf.getBoolean(
+        DFSConfigKeys.DFS_DATANODE_USE_DN_HOSTNAME,
+        DFSConfigKeys.DFS_DATANODE_USE_DN_HOSTNAME_DEFAULT);
+
+    monitorInterval = conf.getTimeDuration(
+        ScmConfigKeys.OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL,
+        ScmConfigKeys.OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL_DEFAULT,
+        TimeUnit.SECONDS);
+    if (monitorInterval <= 0) {
+      LOG.warn("{} must be greater than zero, defaulting to {}",
+          ScmConfigKeys.OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL,
+          ScmConfigKeys.OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL_DEFAULT);
+      conf.set(ScmConfigKeys.OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL,
+          ScmConfigKeys.OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL_DEFAULT);
+      monitorInterval = conf.getTimeDuration(
+          ScmConfigKeys.OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL,
+          ScmConfigKeys.OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL_DEFAULT,
+          TimeUnit.SECONDS);
+    }
+
+    monitor = new DatanodeAdminMonitorImpl(conf, eventQueue, nodeManager,
+        replicationManager);
+
+    executor.scheduleAtFixedRate(monitor, monitorInterval, monitorInterval,
+        TimeUnit.SECONDS);
+  }
+
+  @VisibleForTesting
+  public DatanodeAdminMonitor getMonitor() {
+    return monitor;
+  }
+
+  public synchronized void decommissionNodes(List nodes)
+      throws InvalidHostStringException {
+    List<DatanodeDetails> dns = mapHostnamesToDatanodes(nodes);
+    for (DatanodeDetails dn : dns) {
+      try {
+        startDecommission(dn);
+      } catch (NodeNotFoundException e) {
+        // We already validated the host strings and retrieved the DnDetails
+        // object from the node manager. Therefore we should never get a
+        // NodeNotFoundException here except if the node is removed in the
+        // very short window between validation and starting decommission.
+        // Therefore log a warning and ignore the exception.
+        LOG.warn("The host {} was not found in SCM. Ignoring the request to "+
+            "decommission it", dn.getHostName());
+      } catch (InvalidNodeStateException e) {
+        // TODO - decide how to handle this. We may not want to fail all nodes
+        //        if only one is in a bad state, as some nodes may have been OK
+        //        and already processed. Perhaps we should return a list of
+        //        errors and feed that all the way back to the client?
+      }
+    }
+  }
+
+  /**
+   * If an SCM is restarted, then upon re-registration the datanode will already
+   * be in DECOMMISSIONING or ENTERING_MAINTENANCE state. In that case, it
+   * needs to be added back into the monitor to track its progress.
+   * @param dn Datanode to add back to tracking.
+   * @throws NodeNotFoundException
+   */
+  public synchronized void continueAdminForNode(DatanodeDetails dn)
+      throws NodeNotFoundException {
+    NodeOperationalState opState = getNodeStatus(dn).getOperationalState();
+    if (opState == NodeOperationalState.DECOMMISSIONING
+        || opState == NodeOperationalState.ENTERING_MAINTENANCE
+        || opState == NodeOperationalState.IN_MAINTENANCE) {
+      monitor.startMonitoring(dn);
+    }
+  }
+
+  public synchronized void startDecommission(DatanodeDetails dn)
+      throws NodeNotFoundException, InvalidNodeStateException {
+    NodeStatus nodeStatus = getNodeStatus(dn);
+    NodeOperationalState opState = nodeStatus.getOperationalState();
+    if (opState == NodeOperationalState.IN_SERVICE) {
+      LOG.info("Starting Decommission for node {}", dn);
+      nodeManager.setNodeOperationalState(
+          dn, NodeOperationalState.DECOMMISSIONING);
+      monitor.startMonitoring(dn);
+    } else if (nodeStatus.isDecommission()) {
+      LOG.info("Start Decommission called on node {} in state {}. Nothing to "+
+          "do.", dn, opState);
+    } else {
+      LOG.error("Cannot decommission node {} in state {}", dn, opState);
+      throw new InvalidNodeStateException("Cannot decommission node "+
+          dn +" in state "+ opState);
+    }
+  }
+
+  public synchronized void recommissionNodes(List nodes)
+      throws InvalidHostStringException {
+    List<DatanodeDetails> dns = mapHostnamesToDatanodes(nodes);
+    for (DatanodeDetails dn : dns) {
+      try {
+        recommission(dn);
+      } catch (NodeNotFoundException e) {
+        // We already validated the host strings and retrieved the DnDetails
+        // objects from the node manager, so we should only ever get a
+        // NodeNotFoundException here if the node was removed in the very
+        // short window between validation and starting the recommission.
+        // Therefore log a warning and ignore the exception.
+        LOG.warn("The host {} was not found in SCM. Ignoring the request to "+
+            "recommission it", dn.getHostName());
+      }
+    }
+  }
+
+  public synchronized void recommission(DatanodeDetails dn)
+      throws NodeNotFoundException {
+    NodeStatus nodeStatus = getNodeStatus(dn);
+    NodeOperationalState opState = nodeStatus.getOperationalState();
+    if (opState != NodeOperationalState.IN_SERVICE) {
+      // The node will be set back to IN_SERVICE when it is processed by the
+      // monitor
+      monitor.stopMonitoring(dn);
+      LOG.info("Queued node {} for recommission", dn);
+    } else {
+      LOG.info("Recommission called on node {} with state {}. "+
+          "Nothing to do.", dn, opState);
+    }
+  }
+
+  public synchronized void startMaintenanceNodes(List nodes, int endInHours)
+      throws InvalidHostStringException {
+    List<DatanodeDetails> dns = mapHostnamesToDatanodes(nodes);
+    for (DatanodeDetails dn : dns) {
+      try {
+        startMaintenance(dn, endInHours);
+      } catch (NodeNotFoundException e) {
+        // We already validated the host strings and retrieved the DnDetails
+        // objects from the node manager, so we should only ever get a
+        // NodeNotFoundException here if the node was removed in the very
+        // short window between validation and starting maintenance.
+        // Therefore log a warning and ignore the exception.
+        LOG.warn("The host {} was not found in SCM. Ignoring the request to "+
+            "start maintenance on it", dn.getHostName());
+      } catch (InvalidNodeStateException e) {
+        // TODO - decide how to handle this. We may not want to fail all
+        //        nodes if only one is in a bad state, as some nodes may have
+        //        been OK and already processed. Perhaps we should return a
+        //        list of errors and feed that all the way back to the client?
+      }
+    }
+  }
+
+  // TODO - If startMaintenance is called on a host already in maintenance,
+  //        should we update the end time?
+  public synchronized void startMaintenance(DatanodeDetails dn, int endInHours)
+      throws NodeNotFoundException, InvalidNodeStateException {
+    NodeStatus nodeStatus = getNodeStatus(dn);
+    NodeOperationalState opState = nodeStatus.getOperationalState();
+
+    long maintenanceEnd = 0;
+    if (endInHours != 0) {
+      maintenanceEnd =
+          (System.currentTimeMillis() / 1000L) + (endInHours * 60L * 60L);
+    }
+    if (opState == NodeOperationalState.IN_SERVICE) {
+      nodeManager.setNodeOperationalState(
+          dn, NodeOperationalState.ENTERING_MAINTENANCE, maintenanceEnd);
+      monitor.startMonitoring(dn);
+      LOG.info("Starting Maintenance for node {}", dn);
+    } else if (nodeStatus.isMaintenance()) {
+      LOG.info("Starting Maintenance called on node {} with state {}. "+
+          "Nothing to do.", dn, opState);
+    } else {
+      LOG.error("Cannot start maintenance on node {} in state {}", dn, opState);
+      throw new InvalidNodeStateException("Cannot start maintenance on node "+
+          dn +" in state "+ opState);
+    }
+  }
+
+  /**
+   *  Stops the decommission monitor from running when SCM is shutdown.
+   */
+  public void stop() {
+    if (executor != null) {
+      executor.shutdown();
+    }
+  }
+
+  private NodeStatus getNodeStatus(DatanodeDetails dn)
+      throws NodeNotFoundException {
+    return nodeManager.getNodeStatus(dn);
+  }
+
+}
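To make the workflow above concrete: decommissionNodes / startMaintenanceNodes validate the supplied host strings, flip each node's operational state via the NodeManager, and hand the node to the DatanodeAdminMonitor, which completes (or reverses) the transition on its scheduled runs. The sketch below is illustrative only; it assumes an already constructed decommission manager instance (called decom here, typed with the hypothetical name NodeDecommissionManager, since the enclosing class name is not visible in this hunk) and hostnames already registered with SCM.

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

final class DecommissionWorkflowSketch {
  // NodeDecommissionManager is a hypothetical name for the class whose
  // methods are added in the hunk above.
  static void runAdminWorkflow(NodeDecommissionManager decom)
      throws InvalidHostStringException {
    List<String> hosts = Arrays.asList("dn1.example.com", "dn2.example.com");

    // IN_SERVICE -> DECOMMISSIONING; each node is handed to the
    // DatanodeAdminMonitor, which finishes the transition asynchronously.
    decom.decommissionNodes(hosts);

    // Time-bounded maintenance: endInHours == 0 means no automatic expiry.
    decom.startMaintenanceNodes(
        Collections.singletonList("dn3.example.com"), 12);

    // Reverses either operation; the monitor moves each node back to
    // IN_SERVICE on its next pass.
    decom.recommissionNodes(hosts);
  }
}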
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
index 48c9e04..17bf6b6 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
@@ -28,6 +28,7 @@
 import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState;
 import org.apache.hadoop.hdds.server.events.EventHandler;
 import org.apache.hadoop.hdds.upgrade.HDDSLayoutVersionManager;
 import org.apache.hadoop.ozone.protocol.StorageContainerNodeProtocol;
@@ -66,18 +67,38 @@
     EventHandler<CommandForDatanode>, NodeManagerMXBean, Closeable {
 
   /**
-   * Gets all Live Datanodes that is currently communicating with SCM.
-   * @param nodeState - State of the node
+   * Gets all Live Datanodes that are currently communicating with SCM.
+   * @param nodeStatus - Status of the node to return
    * @return List of Datanodes that are Heartbeating SCM.
    */
-  List<DatanodeDetails> getNodes(NodeState nodeState);
+  List<DatanodeDetails> getNodes(NodeStatus nodeStatus);
 
   /**
-   * Returns the Number of Datanodes that are communicating with SCM.
-   * @param nodeState - State of the node
+   * Gets all Live Datanodes that are currently communicating with SCM.
+   * @param opState - The operational state of the node
+   * @param health - The health of the node
+   * @return List of Datanodes that are Heartbeating SCM.
+   */
+  List<DatanodeDetails> getNodes(
+      NodeOperationalState opState, NodeState health);
+
+  /**
+   * Returns the Number of Datanodes that are communicating with SCM with the
+   * given status.
+   * @param nodeStatus - Status of the node
    * @return int -- count
    */
-  int getNodeCount(NodeState nodeState);
+  int getNodeCount(NodeStatus nodeStatus);
+
+  /**
+   * Returns the Number of Datanodes that are communicating with SCM in the
+   * given state.
+   * @param opState - The operational state of the node
+   * @param health - The health of the node
+   * @return int -- count
+   */
+  int getNodeCount(
+      NodeOperationalState opState, NodeState health);
 
   /**
    * Get all datanodes known to SCM.
@@ -107,11 +128,33 @@
   SCMNodeMetric getNodeStat(DatanodeDetails datanodeDetails);
 
   /**
-   * Returns the node state of a specific node.
+   * Returns the node status of a specific node.
    * @param datanodeDetails DatanodeDetails
-   * @return Healthy/Stale/Dead.
+   * @return NodeStatus for the node
+   * @throws NodeNotFoundException if the node does not exist
    */
-  NodeState getNodeState(DatanodeDetails datanodeDetails);
+  NodeStatus getNodeStatus(DatanodeDetails datanodeDetails)
+      throws NodeNotFoundException;
+
+  /**
+   * Set the operation state of a node.
+   * @param datanodeDetails The datanode to set the new state for
+   * @param newState The new operational state for the node
+   */
+  void setNodeOperationalState(DatanodeDetails datanodeDetails,
+      NodeOperationalState newState) throws NodeNotFoundException;
+
+  /**
+   * Set the operation state of a node.
+   * @param datanodeDetails The datanode to set the new state for
+   * @param newState The new operational state for the node
+   * @param opStateExpiryEpocSec Seconds from the epoch when the operational
+   *                             state should end. Zero indicates the state
+   *                             never ends.
+   */
+  void setNodeOperationalState(DatanodeDetails datanodeDetails,
+       NodeOperationalState newState,
+       long opStateExpiryEpocSec) throws NodeNotFoundException;
 
   /**
    * Get set of pipelines a datanode is part of.
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManagerMXBean.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManagerMXBean.java
index 3e7ecf7..c1c6d0d 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManagerMXBean.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManagerMXBean.java
@@ -34,7 +34,7 @@
    *
    * @return A state to number of nodes that in this state mapping
    */
-  Map<String, Integer> getNodeCount();
+  Map<String, Map<String, Integer>> getNodeCount();
 
   /**
    * Get the disk metrics like capacity, usage and remaining based on the
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
index a01cca73..b70bade 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
@@ -19,7 +19,6 @@
 package org.apache.hadoop.hdds.scm.node;
 
 import java.io.Closeable;
-import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
@@ -33,9 +32,11 @@
 
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.LayoutVersionProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
 import org.apache.hadoop.hdds.scm.events.SCMEvents;
 import org.apache.hadoop.hdds.scm.node.states.Node2PipelineMap;
@@ -58,14 +59,14 @@
 import com.google.common.util.concurrent.ThreadFactoryBuilder;
 
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DEAD;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONED;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONING;
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY_READONLY;
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DEADNODE_INTERVAL;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_HEARTBEAT_PROCESS_INTERVAL;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_STALENODE_INTERVAL;
+import static org.apache.hadoop.hdds.scm.events.SCMEvents.NON_HEALTHY_TO_READONLY_HEALTHY_NODE;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -89,8 +90,7 @@
    * Node's life cycle events.
    */
   private enum NodeLifeCycleEvent {
-    TIMEOUT, RESTORE, RESURRECT, DECOMMISSION, DECOMMISSIONED, LAYOUT_MISMATCH,
-    LAYOUT_MATCH
+    TIMEOUT, RESTORE, RESURRECT, LAYOUT_MISMATCH, LAYOUT_MATCH
   }
 
   private static final Logger LOG = LoggerFactory
@@ -100,7 +100,7 @@
   /**
    * StateMachine for node lifecycle.
    */
-  private final StateMachine<NodeState, NodeLifeCycleEvent> stateMachine;
+  private final StateMachine<NodeState, NodeLifeCycleEvent> nodeHealthSM;
   /**
    * This is the map which maintains the current state of all datanodes.
    */
@@ -173,11 +173,10 @@
     this.state2EventMap = new HashMap<>();
     initialiseState2EventMap();
     Set<NodeState> finalStates = new HashSet<>();
-    finalStates.add(DECOMMISSIONED);
     // All DataNodes should start in HealthyReadOnly state.
-    this.stateMachine = new StateMachine<>(NodeState.HEALTHY_READONLY,
+    this.nodeHealthSM = new StateMachine<>(NodeState.HEALTHY_READONLY,
         finalStates);
-    initializeStateMachine();
+    initializeStateMachines();
     heartbeatCheckerIntervalMs = HddsServerUtil
         .getScmheartbeatCheckerInterval(conf);
     staleNodeIntervalMs = HddsServerUtil.getStaleNodeInterval(conf);
@@ -207,7 +206,7 @@
         .put(HEALTHY, SCMEvents.READ_ONLY_HEALTHY_TO_HEALTHY_NODE);
     state2EventMap
         .put(NodeState.HEALTHY_READONLY,
-            SCMEvents.NON_HEALTHY_TO_READONLY_HEALTHY_NODE);
+            NON_HEALTHY_TO_READONLY_HEALTHY_NODE);
   }
 
   /*
@@ -217,18 +216,12 @@
    * State: HEALTHY             -------------------> STALE
    * Event:                          TIMEOUT
    *
-   * State: HEALTHY             -------------------> DECOMMISSIONING
-   * Event:                        DECOMMISSION
-   *
    * State: HEALTHY             -------------------> HEALTHY_READONLY
    * Event:                       LAYOUT_MISMATCH
    *
    * State: HEALTHY_READONLY    -------------------> HEALTHY
    * Event:                       LAYOUT_MATCH
    *
-   * State: HEALTHY_READONLY    -------------------> DECOMMISSIONING
-   * Event:                        DECOMMISSION
-   *
    * State: HEALTHY_READONLY    -------------------> STALE
    * Event:                          TIMEOUT
    *
@@ -241,95 +234,41 @@
    * State: STALE           -------------------> DEAD
    * Event:                       TIMEOUT
    *
-   * State: STALE           -------------------> DECOMMISSIONING
-   * Event:                     DECOMMISSION
-   *
-   * State: DEAD            -------------------> DECOMMISSIONING
-   * Event:                     DECOMMISSION
-   *
-   * State: DECOMMISSIONING -------------------> DECOMMISSIONED
-   * Event:                     DECOMMISSIONED
-   *
    *  Node State Flow
    *
-   *                                      +->------------------->------+
-   *                                      |                            |
-   *                                      |(DECOMMISSION)              |
-   *                                      ^                            V
-   *                                      |  +-----<---------<---+     |
-   *                                      |  |    (RESURRECT)    |     |
-   *    +-->-----(LAYOUT_MISMATCH)-->--+  |  V                   |     |
-   *    |                              |  |  |                   ^     |
-   *    |                              |  ^  |                   |     |
-   *    |                              V  |  V                   |     |
-   *    |  +-----(LAYOUT_MATCH)--[HEALTHY_READONLY]              |     |
-   *    |  |                            ^  |                     |     V
-   *    |  |                            |  |                     ^     |
-   *    |  |                            |  |(TIMEOUT)            |     |
-   *    ^  |                  (RESTORE) |  |                     |     |
-   *    |  V                            |  V                     |     |
-   * [HEALTHY]---->----------------->[STALE]------->--------->[DEAD]   |
-   *    |           (TIMEOUT)         |         (TIMEOUT)       |      |
-   *    |                             |                         |      |
-   *    V                             |                         V      |
-   *    |                             |                         |      V
-   *    |                             |                         |      |
-   *    |                             |                         |      |
-   *    |(DECOMMISSION)               | (DECOMMISSION)          |(DECOMMISSION)
-   *    |                             V                         |      |
-   *    +---->---------------->[DECOMMISSIONING]<---------------+      |
-   *                                   |   ^                           |
-   *                                   |   |                           V
-   *                                   V   |                           |
-   *                                   |   +-----------<----------<----+
-   *                                   |
-   *                                   |
-   *                                   | (DECOMMISSIONED)
-   *                                   |
-   *                                   V
-   *                          [DECOMMISSIONED]
+   *                                        +-----<---------<---+
+   *                                        |    (RESURRECT)    |
+   *    +-->-----(LAYOUT_MISMATCH)-->--+    V                   |
+   *    |                              |    |                   ^
+   *    |                              |    |                   |
+   *    |                              V    V                   |
+   *    |  +-----(LAYOUT_MATCH)--[HEALTHY_READONLY]             |
+   *    |  |                            ^  |                    |
+   *    |  |                            |  |                    ^
+   *    |  |                            |  |(TIMEOUT)           |
+   *    ^  |                  (RESTORE) |  |                    |
+   *    |  V                            |  V                    |
+   * [HEALTHY]---->----------------->[STALE]------->--------->[DEAD]
+   *               (TIMEOUT)                  (TIMEOUT)             
    *
    */
 
   /**
    * Initializes the lifecycle of node state machine.
    */
-  private void initializeStateMachine() {
-    stateMachine.addTransition(
-        HEALTHY_READONLY, HEALTHY,
+  private void initializeStateMachines() {
+    nodeHealthSM.addTransition(HEALTHY_READONLY, HEALTHY,
         NodeLifeCycleEvent.LAYOUT_MATCH);
-    stateMachine.addTransition(
-        HEALTHY_READONLY, STALE,
+    nodeHealthSM.addTransition(HEALTHY_READONLY, STALE,
         NodeLifeCycleEvent.TIMEOUT);
-    stateMachine.addTransition(
-        HEALTHY_READONLY, DECOMMISSIONING,
-        NodeLifeCycleEvent.DECOMMISSION);
-    stateMachine.addTransition(
-        HEALTHY, STALE, NodeLifeCycleEvent.TIMEOUT);
-    stateMachine.addTransition(
-        HEALTHY, HEALTHY_READONLY,
+    nodeHealthSM.addTransition(HEALTHY, STALE, NodeLifeCycleEvent.TIMEOUT);
+    nodeHealthSM.addTransition(HEALTHY, HEALTHY_READONLY,
         NodeLifeCycleEvent.LAYOUT_MISMATCH);
-    stateMachine.addTransition(
-        STALE, DEAD, NodeLifeCycleEvent.TIMEOUT);
-    stateMachine.addTransition(
-        STALE, HEALTHY_READONLY,
+    nodeHealthSM.addTransition(STALE, DEAD, NodeLifeCycleEvent.TIMEOUT);
+    nodeHealthSM.addTransition(STALE, HEALTHY_READONLY,
         NodeLifeCycleEvent.RESTORE);
-    stateMachine.addTransition(
-        DEAD, HEALTHY_READONLY,
+    nodeHealthSM.addTransition(DEAD, HEALTHY_READONLY,
         NodeLifeCycleEvent.RESURRECT);
-    stateMachine.addTransition(
-        HEALTHY, DECOMMISSIONING,
-        NodeLifeCycleEvent.DECOMMISSION);
-    stateMachine.addTransition(
-        STALE, DECOMMISSIONING,
-        NodeLifeCycleEvent.DECOMMISSION);
-    stateMachine.addTransition(
-        DEAD, DECOMMISSIONING,
-        NodeLifeCycleEvent.DECOMMISSION);
-    stateMachine.addTransition(
-        DECOMMISSIONING, DECOMMISSIONED,
-        NodeLifeCycleEvent.DECOMMISSIONED);
-
   }
 
   /**
@@ -343,12 +282,33 @@
   public void addNode(DatanodeDetails datanodeDetails,
                       LayoutVersionProto layoutInfo)
       throws NodeAlreadyExistsException {
-    nodeStateMap.addNode(datanodeDetails, stateMachine.getInitialState(),
-        layoutInfo);
+    NodeStatus newNodeStatus = newNodeStatus(datanodeDetails);
+    nodeStateMap.addNode(datanodeDetails, newNodeStatus, layoutInfo);
     eventPublisher.fireEvent(SCMEvents.NEW_NODE, datanodeDetails);
   }
 
   /**
+   * When a node registers with SCM, the operational state stored on the
+   * datanode is the source of truth. Therefore, if the datanode reports
+   * anything other than IN_SERVICE on registration, the state in SCM should be
+   * updated to reflect the datanode state.
+   * @param dn DatanodeDetails reported by the datanode
+   */
+  private NodeStatus newNodeStatus(DatanodeDetails dn) {
+    HddsProtos.NodeOperationalState dnOpState = dn.getPersistedOpState();
+    if (dnOpState != NodeOperationalState.IN_SERVICE) {
+      LOG.info("Updating nodeOperationalState on registration as the " +
+              "datanode has a persisted state of {} and expiry of {}",
+          dnOpState, dn.getPersistedOpStateExpiryEpochSec());
+      return new NodeStatus(dnOpState, nodeHealthSM.getInitialState(),
+          dn.getPersistedOpStateExpiryEpochSec());
+    } else {
+      return new NodeStatus(
+          NodeOperationalState.IN_SERVICE, nodeHealthSM.getInitialState());
+    }
+  }
+
+  /**
    * Adds a pipeline in the node2PipelineMap.
    * @param pipeline - Pipeline to be added
    */
@@ -413,62 +373,63 @@
    *
    * @throws NodeNotFoundException if the node is not present
    */
-  public NodeState getNodeState(DatanodeDetails datanodeDetails)
+  public NodeStatus getNodeStatus(DatanodeDetails datanodeDetails)
       throws NodeNotFoundException {
-    return nodeStateMap.getNodeState(datanodeDetails.getUuid());
+    return nodeStateMap.getNodeStatus(datanodeDetails.getUuid());
   }
 
   /**
-   * Returns all the node which are in healthy state.
+   * Returns all the nodes which are in a healthy state, ignoring the
+   * operational state.
    *
    * @return list of healthy nodes
    */
   public List<DatanodeInfo> getHealthyNodes() {
-    List<DatanodeInfo> allHealthyNodes;
-    allHealthyNodes = getNodes(HEALTHY);
-    allHealthyNodes.addAll(getNodes(NodeState.HEALTHY_READONLY));
-    return allHealthyNodes;
+    return getNodes(null, HEALTHY);
   }
 
   /**
-   * Returns all the node which are in stale state.
+   * Returns all the nodes which are in a stale state, ignoring the
+   * operational state.
    *
    * @return list of stale nodes
    */
   public List<DatanodeInfo> getStaleNodes() {
-    return getNodes(STALE);
+    return getNodes(null, NodeState.STALE);
   }
 
   /**
-   * Returns all the node which are in dead state.
+   * Returns all the nodes which are in a dead state, ignoring the
+   * operational state.
    *
    * @return list of dead nodes
    */
   public List<DatanodeInfo> getDeadNodes() {
-    return getNodes(DEAD);
+    return getNodes(null, NodeState.DEAD);
   }
 
   /**
-   * Returns all the node which are in the specified state.
+   * Returns all the nodes with the specified status.
    *
-   * @param state NodeState
+   * @param status NodeStatus
    *
    * @return list of nodes
    */
-  public List<DatanodeInfo> getNodes(NodeState state) {
-    List<DatanodeInfo> nodes = new ArrayList<>();
-    nodeStateMap.getNodes(state).forEach(
-        uuid -> {
-          try {
-            nodes.add(nodeStateMap.getNodeInfo(uuid));
-          } catch (NodeNotFoundException e) {
-            // This should not happen unless someone else other than
-            // NodeStateManager is directly modifying NodeStateMap and removed
-            // the node entry after we got the list of UUIDs.
-            LOG.error("Inconsistent NodeStateMap! {}", nodeStateMap);
-          }
-        });
-    return nodes;
+  public List<DatanodeInfo> getNodes(NodeStatus status) {
+    return nodeStateMap.getDatanodeInfos(status);
+  }
+
+  /**
+   * Returns all the nodes with the specified operationalState and health.
+   *
+   * @param opState The operationalState of the node
+   * @param health  The node health
+   *
+   * @return list of nodes matching the passed states
+   */
+  public List<DatanodeInfo> getNodes(
+      NodeOperationalState opState, NodeState health) {
+    return nodeStateMap.getDatanodeInfos(opState, health);
   }
 
   /**
@@ -477,19 +438,52 @@
    * @return all the managed nodes
    */
   public List<DatanodeInfo> getAllNodes() {
-    List<DatanodeInfo> nodes = new ArrayList<>();
-    nodeStateMap.getAllNodes().forEach(
-        uuid -> {
-          try {
-            nodes.add(nodeStateMap.getNodeInfo(uuid));
-          } catch (NodeNotFoundException e) {
-            // This should not happen unless someone else other than
-            // NodeStateManager is directly modifying NodeStateMap and removed
-            // the node entry after we got the list of UUIDs.
-            LOG.error("Inconsistent NodeStateMap! {}", nodeStateMap);
-          }
-        });
-    return nodes;
+    return nodeStateMap.getAllDatanodeInfos();
+  }
+
+  /**
+   * Sets the operational state of the given node. Intended to be called when
+   * a node is being decommissioned etc.
+   *
+   * @param dn The datanode having its state set
+   * @param newState The new operational State of the node.
+   */
+  public void setNodeOperationalState(DatanodeDetails dn,
+      NodeOperationalState newState)  throws NodeNotFoundException {
+    setNodeOperationalState(dn, newState, 0);
+  }
+
+  /**
+   * Sets the operational state of the given node. Intended to be called when
+   * a node is being decommissioned etc.
+   *
+   * @param dn The datanode having its state set
+   * @param newState The new operational State of the node.
+   * @param stateExpiryEpochSec The number of seconds from the epoch when the
+   *                            operational state should expire. Passing zero
+   *                            indicates the state will never expire
+   */
+  public void setNodeOperationalState(DatanodeDetails dn,
+      NodeOperationalState newState,
+      long stateExpiryEpochSec)  throws NodeNotFoundException {
+    DatanodeInfo dni = nodeStateMap.getNodeInfo(dn.getUuid());
+    NodeStatus oldStatus = dni.getNodeStatus();
+    if (oldStatus.getOperationalState() != newState ||
+        oldStatus.getOpStateExpiryEpochSeconds() != stateExpiryEpochSec) {
+      nodeStateMap.updateNodeOperationalState(
+          dn.getUuid(), newState, stateExpiryEpochSec);
+      // This will trigger an event based on the node's health when the
+      // operational state changes. E.g. a node that was IN_MAINTENANCE goes
+      // to IN_SERVICE + HEALTHY, which triggers the HEALTHY node event to
+      // create new pipelines. On the other hand, if the node goes from
+      // IN_MAINTENANCE to IN_SERVICE + DEAD, it triggers the dead node
+      // handler to remove its container replicas. Sometimes the event will
+      // do nothing, but it does no harm either. E.g. DECOMMISSIONING ->
+      // DECOMMISSIONED + HEALTHY, where the pipeline creation logic will
+      // simply ignore nodes in a decommission state.
+      if (oldStatus.getOperationalState() != newState) {
+        fireHealthStateEvent(oldStatus.getHealth(), dn);
+      }
+    }
   }
 
   /**
@@ -502,42 +496,53 @@
   }
 
   /**
-   * Returns the count of healthy nodes.
+   * Returns the count of healthy nodes, ignoring operational state.
    *
    * @return healthy node count
    */
   public int getHealthyNodeCount() {
-    return getNodeCount(HEALTHY) +
-        getNodeCount(NodeState.HEALTHY_READONLY);
+    return getHealthyNodes().size();
   }
 
   /**
-   * Returns the count of stale nodes.
+   * Returns the count of stale nodes, ignoring operational state.
    *
    * @return stale node count
    */
   public int getStaleNodeCount() {
-    return getNodeCount(STALE);
+    return getStaleNodes().size();
   }
 
   /**
-   * Returns the count of dead nodes.
+   * Returns the count of dead nodes, ignoring operational state.
    *
    * @return dead node count
    */
   public int getDeadNodeCount() {
-    return getNodeCount(DEAD);
+    return getDeadNodes().size();
   }
 
   /**
-   * Returns the count of nodes in specified state.
+   * Returns the count of nodes in specified status.
    *
-   * @param state NodeState
+   * @param status NodeStatus
    *
    * @return node count
    */
-  public int getNodeCount(NodeState state) {
-    return nodeStateMap.getNodeCount(state);
+  public int getNodeCount(NodeStatus status) {
+    return nodeStateMap.getNodeCount(status);
+  }
+
+  /**
+   * Returns the count of nodes in the specified states.
+   *
+   * @param opState The operational state of the node
+   * @param health The health of the node
+   *
+   * @return node count
+   */
+  public int getNodeCount(NodeOperationalState opState, NodeState health) {
+    return nodeStateMap.getNodeCount(opState, health);
   }
 
   /**
@@ -638,10 +643,10 @@
 
   public void forceNodesToHealthyReadOnly() {
     try {
-      List<UUID> nodes = nodeStateMap.getNodes(HEALTHY);
+      List<UUID> nodes = nodeStateMap.getNodes(null, HEALTHY);
       for (UUID id : nodes) {
         DatanodeInfo node = nodeStateMap.getNodeInfo(id);
-        nodeStateMap.updateNodeState(node.getUuid(), HEALTHY,
+        nodeStateMap.updateNodeHealthState(node.getUuid(),
             HEALTHY_READONLY);
         if (state2EventMap.containsKey(HEALTHY_READONLY)) {
           eventPublisher.fireEvent(state2EventMap.get(HEALTHY_READONLY),
@@ -654,7 +659,8 @@
     }
   }
 
-  private void checkNodesHealth() {
+  @VisibleForTesting
+  public void checkNodesHealth() {
 
     /*
      *
@@ -702,49 +708,42 @@
         (layout) -> layout.getMetadataLayoutVersion() !=
             layoutVersionManager.getMetadataLayoutVersion();
     try {
-      for (NodeState state : NodeState.values()) {
-        List<UUID> nodes = nodeStateMap.getNodes(state);
-        for (UUID id : nodes) {
-          DatanodeInfo node = nodeStateMap.getNodeInfo(id);
-          switch (state) {
-          case HEALTHY:
-              // Move the node to STALE if the last heartbeat time is less than
-            // configured stale-node interval.
-            updateNodeLayoutVersionState(node, layoutMisMatchCondition, state,
-                NodeLifeCycleEvent.LAYOUT_MISMATCH);
-            updateNodeState(node, staleNodeCondition, state,
-                NodeLifeCycleEvent.TIMEOUT);
-            break;
-          case HEALTHY_READONLY:
-            // Move the node to STALE if the last heartbeat time is less than
-            // configured stale-node interval.
-            updateNodeLayoutVersionState(node, layoutMatchCondition, state,
-                NodeLifeCycleEvent.LAYOUT_MATCH);
-            updateNodeState(node, staleNodeCondition, state,
-                  NodeLifeCycleEvent.TIMEOUT);
-            break;
-          case STALE:
-            // Move the node to DEAD if the last heartbeat time is less than
-            // configured dead-node interval.
-            updateNodeState(node, deadNodeCondition, state,
-                NodeLifeCycleEvent.TIMEOUT);
-            // Restore the node if we have received heartbeat before configured
-            // stale-node interval.
-            updateNodeState(node, healthyNodeCondition, state,
-                NodeLifeCycleEvent.RESTORE);
-            break;
-          case DEAD:
-            // Resurrect the node if we have received heartbeat before
-            // configured stale-node interval.
-            updateNodeState(node, healthyNodeCondition, state,
-                NodeLifeCycleEvent.RESURRECT);
-            break;
-          // We don't do anything for DECOMMISSIONING and DECOMMISSIONED in
-          // heartbeat processing.
-          case DECOMMISSIONING:
-          case DECOMMISSIONED:
-          default:
-          }
+      for (DatanodeInfo node : nodeStateMap.getAllDatanodeInfos()) {
+        NodeStatus status = nodeStateMap.getNodeStatus(node.getUuid());
+        switch (status.getHealth()) {
+        case HEALTHY:
+          // Move the node to STALE if the last heartbeat time is less than
+          // configured stale-node interval.
+          updateNodeLayoutVersionState(node, layoutMisMatchCondition, status,
+              NodeLifeCycleEvent.LAYOUT_MISMATCH);
+          updateNodeState(node, staleNodeCondition, status,
+              NodeLifeCycleEvent.TIMEOUT);
+          break;
+        case HEALTHY_READONLY:
+          // Move the node to STALE if the last heartbeat time is less than
+          // configured stale-node interval.
+          updateNodeLayoutVersionState(node, layoutMatchCondition, status,
+              NodeLifeCycleEvent.LAYOUT_MATCH);
+          updateNodeState(node, staleNodeCondition, status,
+              NodeLifeCycleEvent.TIMEOUT);
+          break;
+        case STALE:
+          // Move the node to DEAD if the last heartbeat time is less than
+          // configured dead-node interval.
+          updateNodeState(node, deadNodeCondition, status,
+              NodeLifeCycleEvent.TIMEOUT);
+          // Restore the node if we have received heartbeat before configured
+          // stale-node interval.
+          updateNodeState(node, healthyNodeCondition, status,
+              NodeLifeCycleEvent.RESTORE);
+          break;
+        case DEAD:
+          // Resurrect the node if we have received heartbeat before
+          // configured stale-node interval.
+          updateNodeState(node, healthyNodeCondition, status,
+              NodeLifeCycleEvent.RESURRECT);
+          break;
+        default:
         }
       }
     } catch (NodeNotFoundException e) {
@@ -803,27 +802,35 @@
    *
    * @param node DatanodeInfo
    * @param condition condition to check
-   * @param state current state of node
+   * @param status current status of node
    * @param lifeCycleEvent NodeLifeCycleEvent to be applied if condition
    *                       matches
    *
    * @throws NodeNotFoundException if the node is not present
    */
   private void updateNodeState(DatanodeInfo node, Predicate<Long> condition,
-      NodeState state, NodeLifeCycleEvent lifeCycleEvent)
+      NodeStatus status, NodeLifeCycleEvent lifeCycleEvent)
       throws NodeNotFoundException {
     try {
       if (condition.test(node.getLastHeartbeatTime())) {
-        NodeState newState = stateMachine.getNextState(state, lifeCycleEvent);
-        nodeStateMap.updateNodeState(node.getUuid(), state, newState);
-        if (state2EventMap.containsKey(newState)) {
-          eventPublisher.fireEvent(state2EventMap.get(newState), node);
-        }
+        NodeState newHealthState = nodeHealthSM.
+            getNextState(status.getHealth(), lifeCycleEvent);
+        NodeStatus newStatus =
+            nodeStateMap.updateNodeHealthState(node.getUuid(), newHealthState);
+        fireHealthStateEvent(newStatus.getHealth(), node);
       }
     } catch (InvalidStateTransitionException e) {
       LOG.warn("Invalid state transition of node {}." +
               " Current state: {}, life cycle event: {}",
-          node, state, lifeCycleEvent);
+          node, status.getHealth(), lifeCycleEvent);
+    }
+  }
+
+  private void fireHealthStateEvent(HddsProtos.NodeState health,
+      DatanodeDetails node) {
+    Event<DatanodeDetails> event = state2EventMap.get(health);
+    if (event != null) {
+      eventPublisher.fireEvent(event, node);
     }
   }
 
@@ -839,21 +846,22 @@
    * @throws NodeNotFoundException if the node is not present
    */
   private void updateNodeLayoutVersionState(DatanodeInfo node,
-                             Predicate<LayoutVersionProto> condition,
-                             NodeState state, NodeLifeCycleEvent lifeCycleEvent)
+                                            Predicate<LayoutVersionProto>
+                                                condition, NodeStatus status,
+                                            NodeLifeCycleEvent lifeCycleEvent)
       throws NodeNotFoundException {
     try {
       if (condition.test(node.getLastKnownLayoutVersion())) {
-        NodeState newState = stateMachine.getNextState(state, lifeCycleEvent);
-        nodeStateMap.updateNodeState(node.getUuid(), state, newState);
-        if (state2EventMap.containsKey(newState)) {
-          eventPublisher.fireEvent(state2EventMap.get(newState), node);
-        }
+        NodeState newHealthState = nodeHealthSM.getNextState(status.getHealth(),
+            lifeCycleEvent);
+        NodeStatus newStatus =
+            nodeStateMap.updateNodeHealthState(node.getUuid(), newHealthState);
+        fireHealthStateEvent(newStatus.getHealth(), node);
       }
     } catch (InvalidStateTransitionException e) {
       LOG.warn("Invalid state transition of node {}." +
               " Current state: {}, life cycle event: {}",
-          node, state, lifeCycleEvent);
+          node, status.getHealth(), lifeCycleEvent);
     }
   }
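The reduced state diagram above now models only node health (HEALTHY, HEALTHY_READONLY, STALE, DEAD); decommission and maintenance are tracked separately as operational state and no longer pass through the health state machine. The sketch below restates the same transition table with plain enums and an EnumMap, purely for illustration; the patch itself uses Ozone's StateMachine class, which throws InvalidStateTransitionException for undefined transitions.

import java.util.EnumMap;
import java.util.Map;

final class HealthTransitionSketch {
  enum Health { HEALTHY, HEALTHY_READONLY, STALE, DEAD }
  enum Event { TIMEOUT, RESTORE, RESURRECT, LAYOUT_MISMATCH, LAYOUT_MATCH }

  private static final Map<Health, Map<Event, Health>> TABLE =
      new EnumMap<>(Health.class);

  static {
    put(Health.HEALTHY, Event.TIMEOUT, Health.STALE);
    put(Health.HEALTHY, Event.LAYOUT_MISMATCH, Health.HEALTHY_READONLY);
    put(Health.HEALTHY_READONLY, Event.LAYOUT_MATCH, Health.HEALTHY);
    put(Health.HEALTHY_READONLY, Event.TIMEOUT, Health.STALE);
    put(Health.STALE, Event.TIMEOUT, Health.DEAD);
    put(Health.STALE, Event.RESTORE, Health.HEALTHY_READONLY);
    put(Health.DEAD, Event.RESURRECT, Health.HEALTHY_READONLY);
  }

  private static void put(Health from, Event on, Health to) {
    TABLE.computeIfAbsent(from, k -> new EnumMap<>(Event.class)).put(on, to);
  }

  // Returns the next health state, or the current state if the event is not
  // defined for it (the real code raises an exception instead).
  static Health next(Health current, Event event) {
    Map<Event, Health> row = TABLE.get(current);
    Health next = (row == null) ? null : row.get(event);
    return (next == null) ? current : next;
  }
}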
 
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStatus.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStatus.java
new file mode 100644
index 0000000..dc0ce18
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStatus.java
@@ -0,0 +1,211 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+
+import java.util.Objects;
+
+/**
+ * This class is used to capture the current status of a datanode. This
+ * includes its health (healthy, stale or dead) and its operational state
+ * (in service, decommissioned or in maintenance mode), along with the
+ * expiry time for the operational state (used with maintenance mode).
+ */
+public class NodeStatus {
+
+  private HddsProtos.NodeOperationalState operationalState;
+  private HddsProtos.NodeState health;
+  private long opStateExpiryEpochSeconds;
+
+  public NodeStatus(HddsProtos.NodeOperationalState operationalState,
+             HddsProtos.NodeState health) {
+    this.operationalState = operationalState;
+    this.health = health;
+    this.opStateExpiryEpochSeconds = 0;
+  }
+
+  public NodeStatus(HddsProtos.NodeOperationalState operationalState,
+                    HddsProtos.NodeState health,
+                    long opStateExpireEpocSeconds) {
+    this.operationalState = operationalState;
+    this.health = health;
+    this.opStateExpiryEpochSeconds = opStateExpireEpocSeconds;
+  }
+
+  public static NodeStatus inServiceHealthy() {
+    return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+        HddsProtos.NodeState.HEALTHY);
+  }
+
+  public static NodeStatus inServiceHealthyReadOnly() {
+    return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+        HddsProtos.NodeState.HEALTHY_READONLY);
+  }
+
+  public static NodeStatus inServiceStale() {
+    return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+        HddsProtos.NodeState.STALE);
+  }
+
+  public static NodeStatus inServiceDead() {
+    return new NodeStatus(HddsProtos.NodeOperationalState.IN_SERVICE,
+        HddsProtos.NodeState.DEAD);
+  }
+
+  public HddsProtos.NodeState getHealth() {
+    return health;
+  }
+
+  public HddsProtos.NodeOperationalState getOperationalState() {
+    return operationalState;
+  }
+
+  public long getOpStateExpiryEpochSeconds() {
+    return opStateExpiryEpochSeconds;
+  }
+
+  public boolean operationalStateExpired() {
+    if (0 == opStateExpiryEpochSeconds) {
+      return false;
+    }
+    return System.currentTimeMillis() / 1000 >= opStateExpiryEpochSeconds;
+  }
+
+  /**
+   * Returns true if the nodeStatus indicates the node is in any decommission
+   * state.
+   *
+   * @return True if the node is in any decommission state, false otherwise
+   */
+  public boolean isDecommission() {
+    return operationalState == HddsProtos.NodeOperationalState.DECOMMISSIONING
+        || operationalState == HddsProtos.NodeOperationalState.DECOMMISSIONED;
+  }
+
+  /**
+   * Returns true if the node is currently decommissioning.
+   *
+   * @return True if the node is decommissioning, false otherwise
+   */
+  public boolean isDecommissioning() {
+    return operationalState == HddsProtos.NodeOperationalState.DECOMMISSIONING;
+  }
+
+  /**
+   * Returns true if the node is decommissioned.
+   *
+   * @return True if the node is decommissioned, false otherwise
+   */
+  public boolean isDecommissioned() {
+    return operationalState == HddsProtos.NodeOperationalState.DECOMMISSIONED;
+  }
+
+  /**
+   * Returns true if the node is in any maintenance state.
+   *
+   * @return True if the node is in any maintenance state, false otherwise
+   */
+  public boolean isMaintenance() {
+    return operationalState
+        == HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE
+        || operationalState == HddsProtos.NodeOperationalState.IN_MAINTENANCE;
+  }
+
+  /**
+   * Returns true if the node is currently entering maintenance.
+   *
+   * @return True if the node is entering maintenance, false otherwise
+   */
+  public boolean isEnteringMaintenance() {
+    return operationalState
+        == HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE;
+  }
+
+  /**
+   * Returns true if the node is currently in maintenance.
+   *
+   * @return True if the node is in maintenance, false otherwise.
+   */
+  public boolean isInMaintenance() {
+    return operationalState == HddsProtos.NodeOperationalState.IN_MAINTENANCE;
+  }
+
+  /**
+   * Returns true if the nodeStatus is healthy (ie not stale or dead) and false
+   * otherwise.
+   *
+   * @return True if the node is Healthy, false otherwise
+   */
+  public boolean isHealthy() {
+    return health == HddsProtos.NodeState.HEALTHY;
+  }
+
+  /**
+   * Returns true if the nodeStatus is either healthy or stale and false
+   * otherwise.
+   *
+   * @return True if the node is Healthy or Stale, false otherwise.
+   */
+  public boolean isAlive() {
+    return health == HddsProtos.NodeState.HEALTHY
+        || health == HddsProtos.NodeState.STALE;
+  }
+
+  /**
+   * Returns true if the nodeStatus is dead and false otherwise.
+   *
+   * @return True if the node is Dead, false otherwise.
+   */
+  public boolean isDead() {
+    return health == HddsProtos.NodeState.DEAD;
+  }
+
+  @Override
+  public boolean equals(Object obj) {
+    if (this == obj) {
+      return true;
+    }
+    if (obj == null) {
+      return false;
+    }
+    if (getClass() != obj.getClass()) {
+      return false;
+    }
+    NodeStatus other = (NodeStatus) obj;
+    return this.operationalState == other.operationalState
+        && this.health == other.health
+        && this.opStateExpiryEpochSeconds == other.opStateExpiryEpochSeconds;
+  }
+
+  @Override
+  public int hashCode() {
+    return Objects.hash(health, operationalState, opStateExpiryEpochSeconds);
+  }
+
+  @Override
+  public String toString() {
+    return "OperationalState: "+operationalState+" Health: "+health+
+        " OperastionStateExpiry: "+opStateExpiryEpochSeconds;
+  }
+
+}
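A short usage sketch of the NodeStatus class added above, showing how the two dimensions (operational state and health) plus the maintenance expiry combine. The values and inline expectations are illustrative only.

import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.scm.node.NodeStatus;

final class NodeStatusSketch {
  static void demo() {
    // A healthy node entering maintenance for the next 12 hours.
    long expiry = System.currentTimeMillis() / 1000L + 12L * 60L * 60L;
    NodeStatus status = new NodeStatus(
        HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE,
        HddsProtos.NodeState.HEALTHY,
        expiry);

    // The two dimensions are queried independently.
    boolean maint = status.isMaintenance();             // true
    boolean healthy = status.isHealthy();               // true
    boolean expired = status.operationalStateExpired(); // false until expiry

    // An expiry of zero means the operational state never expires; the
    // factory methods cover the common in-service combinations.
    NodeStatus dead = NodeStatus.inServiceDead();
    boolean isDead = dead.isDead() && !dead.isMaintenance(); // true
  }
}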
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
index 51c84dc..0f7562c 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
@@ -17,6 +17,9 @@
  */
 package org.apache.hadoop.hdds.scm.node;
 
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY_READONLY;
+
 import javax.management.ObjectName;
 import java.io.IOException;
 import java.net.InetAddress;
@@ -36,6 +39,7 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.LayoutVersionProto;
@@ -69,11 +73,13 @@
 import org.apache.hadoop.ozone.protocol.commands.FinalizeNewLayoutVersionCommand;
 import org.apache.hadoop.ozone.protocol.commands.RegisteredCommand;
 import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+import org.apache.hadoop.ozone.protocol.commands.SetNodeOperationalStateCommand;
 import org.apache.hadoop.util.ReflectionUtils;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 import com.google.common.base.Strings;
+import org.apache.hadoop.util.Time;
 import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -173,9 +179,29 @@
    * @return List of Datanodes that are known to SCM in the requested state.
    */
   @Override
-  public List<DatanodeDetails> getNodes(NodeState nodestate) {
-    return nodeStateManager.getNodes(nodestate).stream()
-        .map(node -> (DatanodeDetails) node).collect(Collectors.toList());
+  public List<DatanodeDetails> getNodes(NodeStatus nodeStatus) {
+    return nodeStateManager.getNodes(nodeStatus)
+        .stream()
+        .map(node -> (DatanodeDetails)node).collect(Collectors.toList());
+  }
+
+  /**
+   * Returns all datanodes that are in the given states. Passing null for one
+   * of the states acts as a wildcard for that state. This function works by
+   * taking a snapshot of the current collection and then returning the list
+   * from that snapshot. This means the real map might have changed by the
+   * time we return this list.
+   *
+   * @param opState The operational state of the node
+   * @param health The health of the node
+   * @return List of Datanodes that are known to SCM in the requested states.
+   */
+  @Override
+  public List<DatanodeDetails> getNodes(
+      NodeOperationalState opState, NodeState health) {
+    return nodeStateManager.getNodes(opState, health)
+        .stream()
+        .map(node -> (DatanodeDetails)node).collect(Collectors.toList());
   }
 
   /**
@@ -195,24 +221,60 @@
    * @return count
    */
   @Override
-  public int getNodeCount(NodeState nodestate) {
-    return nodeStateManager.getNodeCount(nodestate);
+  public int getNodeCount(NodeStatus nodeStatus) {
+    return nodeStateManager.getNodeCount(nodeStatus);
   }
 
   /**
-   * Returns the node state of a specific node.
+   * Returns the Number of Datanodes by State they are in. Passing null for
+   * either of the states acts like a wildcard for that state.
    *
-   * @param datanodeDetails Datanode Details
-   * @return Healthy/Stale/Dead/Unknown.
+   * @param nodeOpState - The operational state of the node
+   * @param health - The health of the node
+   * @return count
    */
   @Override
-  public NodeState getNodeState(DatanodeDetails datanodeDetails) {
-    try {
-      return nodeStateManager.getNodeState(datanodeDetails);
-    } catch (NodeNotFoundException e) {
-      // TODO: should we throw NodeNotFoundException?
-      return null;
-    }
+  public int getNodeCount(NodeOperationalState nodeOpState, NodeState health) {
+    return nodeStateManager.getNodeCount(nodeOpState, health);
+  }
+
+  /**
+   * Returns the node status of a specific node.
+   *
+   * @param datanodeDetails Datanode Details
+   * @return NodeStatus for the node
+   */
+  @Override
+  public NodeStatus getNodeStatus(DatanodeDetails datanodeDetails)
+      throws NodeNotFoundException {
+    return nodeStateManager.getNodeStatus(datanodeDetails);
+  }
+
+  /**
+   * Set the operation state of a node.
+   * @param datanodeDetails The datanode to set the new state for
+   * @param newState The new operational state for the node
+   */
+  @Override
+  public void setNodeOperationalState(DatanodeDetails datanodeDetails,
+      NodeOperationalState newState) throws NodeNotFoundException {
+    setNodeOperationalState(datanodeDetails, newState, 0);
+  }
+
+  /**
+   * Set the operation state of a node.
+   * @param datanodeDetails The datanode to set the new state for
+   * @param newState The new operational state for the node
+   * @param opStateExpiryEpocSec Seconds from the epoch when the operational
+   *                             state should end. Zero indicates the state
+   *                             never ends.
+   */
+  @Override
+  public void setNodeOperationalState(DatanodeDetails datanodeDetails,
+      NodeOperationalState newState, long opStateExpiryEpocSec)
+      throws NodeNotFoundException {
+    nodeStateManager.setNodeOperationalState(
+        datanodeDetails, newState, opStateExpiryEpocSec);
   }
 
   /**
@@ -358,6 +420,7 @@
       nodeStateManager.updateLastKnownLayoutVersion(datanodeDetails,
           layoutInfo);
       metrics.incNumHBProcessed();
+      updateDatanodeOpState(datanodeDetails);
     } catch (NodeNotFoundException e) {
       metrics.incNumHBProcessingFailed();
       LOG.error("SCM trying to process heartbeat from an " +
@@ -366,6 +429,41 @@
     return commandQueue.getCommand(datanodeDetails.getUuid());
   }
 
+  /**
+   * If the operational state or expiry reported in the datanode heartbeat do
+   * not match those store in SCM, queue a command to update the state persisted
+   * on the datanode. Additionally, ensure the datanodeDetails stored in SCM
+   * match those reported in the heartbeat.
+   * This method should only be called when processing the
+   * heartbeat, and for a registered node, the information stored in SCM is the
+   * source of truth.
+   * @param reportedDn The DatanodeDetails taken from the node heartbeat.
+   * @throws NodeNotFoundException
+   */
+  private void updateDatanodeOpState(DatanodeDetails reportedDn)
+      throws NodeNotFoundException {
+    NodeStatus scmStatus = getNodeStatus(reportedDn);
+    if (scmStatus.getOperationalState() != reportedDn.getPersistedOpState()
+        || scmStatus.getOpStateExpiryEpochSeconds()
+        != reportedDn.getPersistedOpStateExpiryEpochSec()) {
+      LOG.info("Scheduling a command to update the operationalState " +
+          "persisted on the datanode as the reported value ({}, {}) does not " +
+          "match the value stored in SCM ({}, {})",
+          reportedDn.getPersistedOpState(),
+          reportedDn.getPersistedOpStateExpiryEpochSec(),
+          scmStatus.getOperationalState(),
+          scmStatus.getOpStateExpiryEpochSeconds());
+      commandQueue.addCommand(reportedDn.getUuid(),
+          new SetNodeOperationalStateCommand(
+              Time.monotonicNow(), scmStatus.getOperationalState(),
+              scmStatus.getOpStateExpiryEpochSeconds()));
+    }
+    DatanodeDetails scmDnd = nodeStateManager.getNode(reportedDn);
+    scmDnd.setPersistedOpStateExpiryEpochSec(
+        reportedDn.getPersistedOpStateExpiryEpochSec());
+    scmDnd.setPersistedOpState(reportedDn.getPersistedOpState());
+  }
+
   @Override
   public Boolean isNodeRegistered(DatanodeDetails datanodeDetails) {
     try {
@@ -492,11 +590,11 @@
     final Map<DatanodeDetails, SCMNodeStat> nodeStats = new HashMap<>();
 
     final List<DatanodeInfo> healthyNodes = nodeStateManager
-        .getNodes(NodeState.HEALTHY);
+        .getNodes(null, HEALTHY);
     final List<DatanodeInfo> healthyReadOnlyNodes = nodeStateManager
-        .getNodes(NodeState.HEALTHY_READONLY);
+        .getNodes(null, HEALTHY_READONLY);
     final List<DatanodeInfo> staleNodes = nodeStateManager
-        .getNodes(NodeState.STALE);
+        .getStaleNodes();
     final List<DatanodeInfo> datanodes = new ArrayList<>(healthyNodes);
     datanodes.addAll(healthyReadOnlyNodes);
     datanodes.addAll(staleNodes);
@@ -546,66 +644,99 @@
     }
   }
 
-  @Override
-  public Map<String, Integer> getNodeCount() {
-    Map<String, Integer> nodeCountMap = new HashMap<String, Integer>();
-    for (NodeState state : NodeState.values()) {
-      nodeCountMap.put(state.toString(), getNodeCount(state));
+  @Override // NodeManagerMXBean
+  public Map<String, Map<String, Integer>> getNodeCount() {
+    Map<String, Map<String, Integer>> nodes = new HashMap<>();
+    for (NodeOperationalState opState : NodeOperationalState.values()) {
+      Map<String, Integer> states = new HashMap<>();
+      for (NodeState health : NodeState.values()) {
+        states.put(health.name(), 0);
+      }
+      nodes.put(opState.name(), states);
     }
-    return nodeCountMap;
+    for (DatanodeInfo dni : nodeStateManager.getAllNodes()) {
+      NodeStatus status = dni.getNodeStatus();
+      nodes.get(status.getOperationalState().name())
+          .compute(status.getHealth().name(), (k, v) -> v + 1);
+    }
+    return nodes;
   }
 
   // We should introduce DISK, SSD, etc., notion in
   // SCMNodeStat and try to use it.
-  @Override
+  @Override // NodeManagerMXBean
   public Map<String, Long> getNodeInfo() {
-    long diskCapacity = 0L;
-    long diskUsed = 0L;
-    long diskRemaning = 0L;
-
-    long ssdCapacity = 0L;
-    long ssdUsed = 0L;
-    long ssdRemaining = 0L;
-
-    List<DatanodeInfo> healthyNodes = nodeStateManager
-        .getNodes(NodeState.HEALTHY);
-    List<DatanodeInfo> healthyReadOnlyNodes = nodeStateManager
-        .getNodes(NodeState.HEALTHY_READONLY);
-    List<DatanodeInfo> staleNodes = nodeStateManager
-        .getNodes(NodeState.STALE);
-
-    List<DatanodeInfo> datanodes = new ArrayList<>(healthyNodes);
-    datanodes.addAll(healthyReadOnlyNodes);
-    datanodes.addAll(staleNodes);
-
-    for (DatanodeInfo dnInfo : datanodes) {
-      List<StorageReportProto> storageReportProtos = dnInfo.getStorageReports();
-      for (StorageReportProto reportProto : storageReportProtos) {
-        if (reportProto.getStorageType() ==
-            StorageContainerDatanodeProtocolProtos.StorageTypeProto.DISK) {
-          diskCapacity += reportProto.getCapacity();
-          diskRemaning += reportProto.getRemaining();
-          diskUsed += reportProto.getScmUsed();
-        } else if (reportProto.getStorageType() ==
-            StorageContainerDatanodeProtocolProtos.StorageTypeProto.SSD) {
-          ssdCapacity += reportProto.getCapacity();
-          ssdRemaining += reportProto.getRemaining();
-          ssdUsed += reportProto.getScmUsed();
-        }
+    Map<String, Long> nodeInfo = new HashMap<>();
+    // Compute all the possible stats from the enums, and default to zero:
+    for (UsageStates s : UsageStates.values()) {
+      for (UsageMetrics stat : UsageMetrics.values()) {
+        nodeInfo.put(s.label + stat.name(), 0L);
       }
     }
 
-    Map<String, Long> nodeInfo = new HashMap<>();
-    nodeInfo.put("DISKCapacity", diskCapacity);
-    nodeInfo.put("DISKUsed", diskUsed);
-    nodeInfo.put("DISKRemaining", diskRemaning);
-
-    nodeInfo.put("SSDCapacity", ssdCapacity);
-    nodeInfo.put("SSDUsed", ssdUsed);
-    nodeInfo.put("SSDRemaining", ssdRemaining);
+    for (DatanodeInfo node : nodeStateManager.getAllNodes()) {
+      String keyPrefix = "";
+      NodeStatus status = node.getNodeStatus();
+      if (status.isMaintenance()) {
+        keyPrefix = UsageStates.MAINT.getLabel();
+      } else if (status.isDecommission()) {
+        keyPrefix = UsageStates.DECOM.getLabel();
+      } else if (status.isAlive()) {
+        // In service but not dead.
+        keyPrefix = UsageStates.ONLINE.getLabel();
+      } else {
+        // Dead in-service node, skip it.
+        continue;
+      }
+      List<StorageReportProto> storageReportProtos = node.getStorageReports();
+      for (StorageReportProto reportProto : storageReportProtos) {
+        if (reportProto.getStorageType() ==
+            StorageContainerDatanodeProtocolProtos.StorageTypeProto.DISK) {
+          nodeInfo.compute(keyPrefix + UsageMetrics.DiskCapacity.name(),
+              (k, v) -> v + reportProto.getCapacity());
+          nodeInfo.compute(keyPrefix + UsageMetrics.DiskRemaining.name(),
+              (k, v) -> v + reportProto.getRemaining());
+          nodeInfo.compute(keyPrefix + UsageMetrics.DiskUsed.name(),
+              (k, v) -> v + reportProto.getScmUsed());
+        } else if (reportProto.getStorageType() ==
+            StorageContainerDatanodeProtocolProtos.StorageTypeProto.SSD) {
+          nodeInfo.compute(keyPrefix + UsageMetrics.SSDCapacity.name(),
+              (k, v) -> v + reportProto.getCapacity());
+          nodeInfo.compute(keyPrefix + UsageMetrics.SSDRemaining.name(),
+              (k, v) -> v + reportProto.getRemaining());
+          nodeInfo.compute(keyPrefix + UsageMetrics.SSDUsed.name(),
+              (k, v) -> v + reportProto.getScmUsed());
+        }
+      }
+    }
     return nodeInfo;
   }
 
+  private enum UsageMetrics {
+    DiskCapacity,
+    DiskUsed,
+    DiskRemaining,
+    SSDCapacity,
+    SSDUsed,
+    SSDRemaining
+  }
+
+  private enum UsageStates {
+    ONLINE(""),
+    MAINT("Maintenance"),
+    DECOM("Decommissioned");
+
+    private final String label;
+
+    public String getLabel() {
+      return label;
+    }
+
+    UsageStates(String label) {
+      this.label = label;
+    }
+  }
+
   /**
    * Returns the minimum number of healthy volumes reported out of the set
    * of datanodes constituting the pipeline.
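
The rewritten getNodeInfo() keys the returned map by concatenating the operational-state label with the metric name, so in-service nodes get unprefixed entries (DiskCapacity, SSDUsed, ...) while maintenance and decommissioned nodes get Maintenance*/Decommissioned* entries. A minimal, self-contained sketch of that key scheme (key names only; the aggregation over storage reports is omitted):

    // Stand-alone illustration of the nodeInfo key scheme; not production code.
    import java.util.HashMap;
    import java.util.Map;

    public class NodeInfoKeySketch {
      enum Metric { DiskCapacity, DiskUsed, DiskRemaining,
                    SSDCapacity, SSDUsed, SSDRemaining }

      public static void main(String[] args) {
        // Labels mirror UsageStates: ONLINE(""), MAINT("Maintenance"),
        // DECOM("Decommissioned").
        String[] labels = {"", "Maintenance", "Decommissioned"};
        Map<String, Long> nodeInfo = new HashMap<>();
        for (String label : labels) {
          for (Metric m : Metric.values()) {
            nodeInfo.put(label + m.name(), 0L); // e.g. "MaintenanceDiskUsed"
          }
        }
        System.out.println(nodeInfo.keySet()); // 18 keys in total
      }
    }
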
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeMetrics.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeMetrics.java
index 111c546..f265d56 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeMetrics.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeMetrics.java
@@ -23,6 +23,7 @@
 import org.apache.hadoop.hdds.annotation.InterfaceAudience;
 import org.apache.hadoop.metrics2.MetricsCollector;
 import org.apache.hadoop.metrics2.MetricsInfo;
+import org.apache.hadoop.metrics2.MetricsRecordBuilder;
 import org.apache.hadoop.metrics2.MetricsSource;
 import org.apache.hadoop.metrics2.MetricsSystem;
 import org.apache.hadoop.metrics2.annotation.Metric;
@@ -33,12 +34,7 @@
 import org.apache.hadoop.metrics2.lib.MutableCounterLong;
 import org.apache.hadoop.ozone.OzoneConsts;
 
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DEAD;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONED;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONING;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY_READONLY;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * This class maintains Node related metrics.
@@ -54,6 +50,7 @@
   private @Metric MutableCounterLong numHBProcessingFailed;
   private @Metric MutableCounterLong numNodeReportProcessed;
   private @Metric MutableCounterLong numNodeReportProcessingFailed;
+  private @Metric String textMetric;
 
   private final MetricsRegistry registry;
   private final NodeManagerMXBean managerMXBean;
@@ -64,6 +61,7 @@
   private SCMNodeMetrics(NodeManagerMXBean managerMXBean) {
     this.managerMXBean = managerMXBean;
     this.registry = new MetricsRegistry(recordInfo);
+    this.textMetric = "my_test_metric";
   }
 
   /**
@@ -119,48 +117,57 @@
   @Override
   @SuppressWarnings("SuspiciousMethodCalls")
   public void getMetrics(MetricsCollector collector, boolean all) {
-    Map<String, Integer> nodeCount = managerMXBean.getNodeCount();
+    Map<String, Map<String, Integer>> nodeCount = managerMXBean.getNodeCount();
     Map<String, Long> nodeInfo = managerMXBean.getNodeInfo();
-    registry.snapshot(
-        collector.addRecord(registry.info()) // Add annotated ones first
-            .addGauge(Interns.info(
-                "HealthyNodes",
-                "Number of healthy datanodes"),
-                nodeCount.get(HEALTHY.toString()))
-            .addGauge(Interns.info(
-                "HealthyReadOnlyNodes",
-                "Number of healthy and read only datanodes"),
-                nodeCount.get(HEALTHY_READONLY.toString()))
-            .addGauge(Interns.info("StaleNodes",
-                "Number of stale datanodes"),
-                nodeCount.get(STALE.toString()))
-            .addGauge(Interns.info("DeadNodes",
-                "Number of dead datanodes"),
-                nodeCount.get(DEAD.toString()))
-            .addGauge(Interns.info("DecommissioningNodes",
-                "Number of decommissioning datanodes"),
-                nodeCount.get(DECOMMISSIONING.toString()))
-            .addGauge(Interns.info("DecommissionedNodes",
-                "Number of decommissioned datanodes"),
-                nodeCount.get(DECOMMISSIONED.toString()))
-            .addGauge(Interns.info("DiskCapacity",
-                "Total disk capacity"),
-                nodeInfo.get("DISKCapacity"))
-            .addGauge(Interns.info("DiskUsed",
-                "Total disk capacity used"),
-                nodeInfo.get("DISKUsed"))
-            .addGauge(Interns.info("DiskRemaining",
-                "Total disk capacity remaining"),
-                nodeInfo.get("DISKRemaining"))
-            .addGauge(Interns.info("SSDCapacity",
-                "Total ssd capacity"),
-                nodeInfo.get("SSDCapacity"))
-            .addGauge(Interns.info("SSDUsed",
-                "Total ssd capacity used"),
-                nodeInfo.get("SSDUsed"))
-            .addGauge(Interns.info("SSDRemaining",
-                "Total disk capacity remaining"),
-                nodeInfo.get("SSDRemaining")),
-        all);
+    /*
+     * Loop over the node map and create a metric for the cross product of all
+     * operational and health states, i.e.:
+     *     InServiceHealthy
+     *     InServiceStale
+     *     ...
+     *     EnteringMaintenanceHealthy
+     *     ...
+     */
+    MetricsRecordBuilder metrics = collector.addRecord(registry.info());
+    for (Map.Entry<String, Map<String, Integer>> e : nodeCount.entrySet()) {
+      for (Map.Entry<String, Integer> h : e.getValue().entrySet()) {
+        metrics.addGauge(
+            Interns.info(
+                StringUtils.camelize(e.getKey() + "_" + h.getKey() + "_nodes"),
+                "Number of " + e.getKey() + " " + h.getKey() + " datanodes"),
+            h.getValue());
+      }
+    }
+
+    for (Map.Entry<String, Long> e : nodeInfo.entrySet()) {
+      metrics.addGauge(
+          Interns.info(e.getKey(), diskMetricDescription(e.getKey())),
+          e.getValue());
+    }
+    registry.snapshot(metrics, all);
+  }
+
+  private String diskMetricDescription(String metric) {
+    StringBuilder sb = new StringBuilder();
+    sb.append("Total");
+    if (metric.indexOf("Maintenance") >= 0) {
+      sb.append(" maintenance");
+    } else if (metric.indexOf("Decommissioned") >= 0) {
+      sb.append(" decommissioned");
+    }
+    if (metric.indexOf("DiskCapacity") >= 0) {
+      sb.append(" disk capacity");
+    } else if (metric.indexOf("DiskUsed") >= 0) {
+      sb.append(" disk capacity used");
+    } else if (metric.indexOf("DiskRemaining") >= 0) {
+      sb.append(" disk capacity remaining");
+    } else if (metric.indexOf("SSDCapacity") >= 0) {
+      sb.append(" SSD capacity");
+    } else if (metric.indexOf("SSDUsed") >= 0) {
+      sb.append(" SSD capacity used");
+    } else if (metric.indexOf("SSDRemaining") >= 0) {
+      sb.append(" SSD capacity remaining");
+    }
+    return sb.toString();
   }
 }
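
The node-count gauges above are named from the nested map keys, so an operational state of IN_SERVICE paired with a health of HEALTHY surfaces as a metric like InServiceHealthyNodes. The production code delegates the name mangling to Hadoop's StringUtils.camelize; the snippet below is only a rough, local approximation of that step, shown to make the resulting metric names concrete:

    // Approximation of the gauge-name construction; StringUtils.camelize is
    // assumed to behave similarly (split on '_', capitalize each word).
    public class GaugeNameSketch {
      static String camelize(String s) {
        StringBuilder sb = new StringBuilder();
        for (String word : s.toLowerCase().split("_")) {
          if (!word.isEmpty()) {
            sb.append(Character.toUpperCase(word.charAt(0)))
                .append(word.substring(1));
          }
        }
        return sb.toString();
      }

      public static void main(String[] args) {
        // Mirrors: camelize(opState + "_" + health + "_nodes")
        System.out.println(camelize("IN_SERVICE_HEALTHY_nodes"));
        // -> InServiceHealthyNodes
      }
    }
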
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StartDatanodeAdminHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StartDatanodeAdminHandler.java
new file mode 100644
index 0000000..9418a7a
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StartDatanodeAdminHandler.java
@@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Set;
+
+/**
+ * Handler which is fired when a datanode starts admin (decommission or
+ * maintenance).
+ */
+public class StartDatanodeAdminHandler
+    implements EventHandler<DatanodeDetails> {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(StartDatanodeAdminHandler.class);
+
+  private final NodeManager nodeManager;
+  private final PipelineManager pipelineManager;
+
+  public StartDatanodeAdminHandler(NodeManager nodeManager,
+      PipelineManager pipelineManager) {
+    this.nodeManager = nodeManager;
+    this.pipelineManager = pipelineManager;
+  }
+
+  @Override
+  public void onMessage(DatanodeDetails datanodeDetails,
+                        EventPublisher publisher) {
+    Set<PipelineID> pipelineIds =
+        nodeManager.getPipelines(datanodeDetails);
+    LOG.info("Admin start on datanode {}. Finalizing its pipelines {}",
+        datanodeDetails, pipelineIds);
+    for (PipelineID pipelineID : pipelineIds) {
+      try {
+        Pipeline pipeline = pipelineManager.getPipeline(pipelineID);
+        pipelineManager.finalizeAndDestroyPipeline(pipeline, false);
+      } catch (IOException e) {
+        LOG.info("Could not finalize pipeline={} for dn={}", pipelineID,
+            datanodeDetails);
+      }
+    }
+  }
+}
\ No newline at end of file
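
The handler above only takes effect once it is registered on the SCM event queue for the corresponding admin event. The wiring sketch below is illustrative only; the event constant shown is a placeholder, not necessarily the one defined in SCMEvents.

    // Illustrative wiring only; START_ADMIN_ON_NODE is a hypothetical constant.
    import org.apache.hadoop.hdds.protocol.DatanodeDetails;
    import org.apache.hadoop.hdds.scm.node.NodeManager;
    import org.apache.hadoop.hdds.scm.node.StartDatanodeAdminHandler;
    import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
    import org.apache.hadoop.hdds.server.events.EventQueue;
    import org.apache.hadoop.hdds.server.events.TypedEvent;

    public final class AdminHandlerWiringSketch {
      // Hypothetical event fired when decommission/maintenance starts on a node.
      public static final TypedEvent<DatanodeDetails> START_ADMIN_ON_NODE =
          new TypedEvent<>(DatanodeDetails.class, "START_ADMIN_ON_NODE");

      private AdminHandlerWiringSketch() {
      }

      public static void wire(EventQueue queue, NodeManager nodeManager,
          PipelineManager pipelineManager) {
        queue.addHandler(START_ADMIN_ON_NODE,
            new StartDatanodeAdminHandler(nodeManager, pipelineManager));
      }
    }
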
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/NodeStateMap.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/NodeStateMap.java
index 3494b03..0a3e137 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/NodeStateMap.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/NodeStateMap.java
@@ -18,17 +18,25 @@
 
 package org.apache.hadoop.hdds.scm.node.states;
 
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.locks.ReadWriteLock;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+import java.util.stream.Collectors;
+
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.LayoutVersionProto;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
 import org.apache.hadoop.hdds.scm.node.DatanodeInfo;
-
-import java.util.*;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.locks.ReadWriteLock;
-import java.util.concurrent.locks.ReentrantReadWriteLock;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 
 /**
  * Maintains the state of datanodes in SCM. This class should only be used by
@@ -37,16 +45,11 @@
  * this class.
  */
 public class NodeStateMap {
-
   /**
    * Node id to node info map.
    */
   private final ConcurrentHashMap<UUID, DatanodeInfo> nodeMap;
   /**
-   * Represents the current state of node.
-   */
-  private final ConcurrentHashMap<NodeState, Set<UUID>> stateMap;
-  /**
    * Node to set of containers on the node.
    */
   private final ConcurrentHashMap<UUID, Set<ContainerID>> nodeToContainer;
@@ -59,31 +62,21 @@
   public NodeStateMap() {
     lock = new ReentrantReadWriteLock();
     nodeMap = new ConcurrentHashMap<>();
-    stateMap = new ConcurrentHashMap<>();
     nodeToContainer = new ConcurrentHashMap<>();
-    initStateMap();
-  }
-
-  /**
-   * Initializes the state map with available states.
-   */
-  private void initStateMap() {
-    for (NodeState state : NodeState.values()) {
-      stateMap.put(state, ConcurrentHashMap.newKeySet());
-    }
   }
 
   /**
    * Adds a node to NodeStateMap.
    *
    * @param datanodeDetails DatanodeDetails
-   * @param nodeState initial NodeState
+   * @param nodeStatus initial NodeStatus
    * @param layoutInfo initial LayoutVersionProto
    *
    * @throws NodeAlreadyExistsException if the node already exist
    */
-  public void addNode(DatanodeDetails datanodeDetails, NodeState nodeState,
+  public void addNode(DatanodeDetails datanodeDetails, NodeStatus nodeStatus,
                       LayoutVersionProto layoutInfo)
       throws NodeAlreadyExistsException {
     lock.writeLock().lock();
     try {
@@ -91,34 +84,56 @@
       if (nodeMap.containsKey(id)) {
         throw new NodeAlreadyExistsException("Node UUID: " + id);
       }
-      nodeMap.put(id, new DatanodeInfo(datanodeDetails, layoutInfo));
-      nodeToContainer.put(id, ConcurrentHashMap.newKeySet());
-      stateMap.get(nodeState).add(id);
+      nodeMap.put(id, new DatanodeInfo(datanodeDetails, nodeStatus,
+          layoutInfo));
+      nodeToContainer.put(id, new HashSet<>());
     } finally {
       lock.writeLock().unlock();
     }
   }
 
   /**
-   * Updates the node state.
+   * Updates the node health state.
    *
    * @param nodeId Node Id
-   * @param currentState current state
-   * @param newState new state
+   * @param newHealth new health state
    *
    * @throws NodeNotFoundException if the node is not present
    */
-  public void updateNodeState(UUID nodeId, NodeState currentState,
-                              NodeState newState)throws NodeNotFoundException {
-    lock.writeLock().lock();
+  public NodeStatus updateNodeHealthState(UUID nodeId, NodeState newHealth)
+      throws NodeNotFoundException {
     try {
-      checkIfNodeExist(nodeId);
-      if (stateMap.get(currentState).remove(nodeId)) {
-        stateMap.get(newState).add(nodeId);
-      } else {
-        throw new NodeNotFoundException("Node UUID: " + nodeId +
-            ", not found in state: " + currentState);
-      }
+      lock.writeLock().lock();
+      DatanodeInfo dn = getNodeInfo(nodeId);
+      NodeStatus oldStatus = dn.getNodeStatus();
+      NodeStatus newStatus = new NodeStatus(
+          oldStatus.getOperationalState(), newHealth);
+      dn.setNodeStatus(newStatus);
+      return newStatus;
+    } finally {
+      lock.writeLock().unlock();
+    }
+  }
+
+  /**
+   * Updates the node operational state.
+   *
+   * @param nodeId Node Id
+   * @param newOpState new operational state
+   *
+   * @throws NodeNotFoundException if the node is not present
+   */
+  public NodeStatus updateNodeOperationalState(UUID nodeId,
+      NodeOperationalState newOpState, long opStateExpiryEpochSeconds)
+      throws NodeNotFoundException {
+    try {
+      lock.writeLock().lock();
+      DatanodeInfo dn = getNodeInfo(nodeId);
+      NodeStatus oldStatus = dn.getNodeStatus();
+      NodeStatus newStatus = new NodeStatus(
+          newOpState, oldStatus.getHealth(), opStateExpiryEpochSeconds);
+      dn.setNodeStatus(newStatus);
+      return newStatus;
     } finally {
       lock.writeLock().unlock();
     }
@@ -143,21 +158,38 @@
     }
   }
 
-
   /**
    * Returns the list of node ids which are in the specified state.
    *
-   * @param state NodeState
+   * @param status NodeStatus
    *
    * @return list of node ids
    */
-  public List<UUID> getNodes(NodeState state) {
-    lock.readLock().lock();
-    try {
-      return new ArrayList<>(stateMap.get(state));
-    } finally {
-      lock.readLock().unlock();
+  public List<UUID> getNodes(NodeStatus status) {
+    ArrayList<UUID> nodes = new ArrayList<>();
+    for (DatanodeInfo dn : filterNodes(status)) {
+      nodes.add(dn.getUuid());
     }
+    return nodes;
+  }
+
+  /**
+   * Returns the list of node ids which match the desired operational state
+   * and health. Passing a null for either value is equivalent to a wild card.
+   *
+   * Therefore, passing opState = null, health=stale will return all stale nodes
+   * regardless of their operational state.
+   *
+   * @param opState
+   * @param health
+   * @return The list of nodes matching the given states
+   */
+  public List<UUID> getNodes(NodeOperationalState opState, NodeState health) {
+    ArrayList<UUID> nodes = new ArrayList<>();
+    for (DatanodeInfo dn : filterNodes(opState, health)) {
+      nodes.add(dn.getUuid());
+    }
+    return nodes;
   }
 
   /**
@@ -166,8 +198,8 @@
    * @return list of all the node ids
    */
   public List<UUID> getAllNodes() {
-    lock.readLock().lock();
     try {
+      lock.readLock().lock();
       return new ArrayList<>(nodeMap.keySet());
     } finally {
       lock.readLock().unlock();
@@ -175,22 +207,72 @@
   }
 
   /**
-   * Returns the count of nodes in the specified state.
+   * Returns the list of all the nodes as DatanodeInfo objects.
    *
-   * @param state NodeState
-   *
-   * @return Number of nodes in the specified state
+   * @return list of all the nodes as DatanodeInfo objects
    */
-  public int getNodeCount(NodeState state) {
-    lock.readLock().lock();
+  public List<DatanodeInfo> getAllDatanodeInfos() {
     try {
-      return stateMap.get(state).size();
+      lock.readLock().lock();
+      return new ArrayList<>(nodeMap.values());
     } finally {
       lock.readLock().unlock();
     }
   }
 
   /**
+   * Returns a list of the nodes as DatanodeInfo objects matching the passed
+   * status.
+   *
+   * @param status - The status of the nodes to return
+   * @return List of DatanodeInfo for the matching nodes
+   */
+  public List<DatanodeInfo> getDatanodeInfos(NodeStatus status) {
+    return filterNodes(status);
+  }
+
+  /**
+   * Returns a list of the nodes as DatanodeInfo objects matching the passed
+   * states. Passing null for either of the state values acts as a wildcard
+   * for that state.
+   *
+   * @param opState - The node operational state
+   * @param health - The node health
+   * @return List of DatanodeInfo for the matching nodes
+   */
+  public List<DatanodeInfo> getDatanodeInfos(
+      NodeOperationalState opState, NodeState health) {
+    return filterNodes(opState, health);
+  }
+
+  /**
+   * Returns the count of nodes in the specified state.
+   *
+   * @param state NodeStatus
+   *
+   * @return Number of nodes in the specified state
+   */
+  public int getNodeCount(NodeStatus state) {
+    return getNodes(state).size();
+  }
+
+  /**
+   * Returns the count of node ids which match the desired operational state
+   * and health. Passing a null for either value is equivalent to a wild card.
+   *
+   * Therefore, passing opState=null, health=stale will count all stale nodes
+   * regardless of their operational state.
+   *
+   * @param opState
+   * @param health
+   *
+   * @return Number of nodes in the specified state
+   */
+  public int getNodeCount(NodeOperationalState opState, NodeState health) {
+    return getNodes(opState, health).size();
+  }
+
+  /**
    * Returns the total node count.
    *
    * @return node count
@@ -213,17 +295,15 @@
    *
    * @throws NodeNotFoundException if the node is not found
    */
-  public NodeState getNodeState(UUID uuid) throws NodeNotFoundException {
+  public NodeStatus getNodeStatus(UUID uuid) throws NodeNotFoundException {
     lock.readLock().lock();
     try {
-      checkIfNodeExist(uuid);
-      for (Map.Entry<NodeState, Set<UUID>> entry : stateMap.entrySet()) {
-        if (entry.getValue().contains(uuid)) {
-          return entry.getKey();
-        }
+      DatanodeInfo dn = nodeMap.get(uuid);
+      if (dn == null) {
+        throw new NodeNotFoundException("Node not found in node map." +
+            " UUID: " + uuid);
       }
-      throw new NodeNotFoundException("Node not found in node state map." +
-          " UUID: " + uuid);
+      return dn.getNodeStatus();
     } finally {
       lock.readLock().unlock();
     }
@@ -265,7 +345,8 @@
     lock.readLock().lock();
     try {
       checkIfNodeExist(uuid);
-      return Collections.unmodifiableSet(nodeToContainer.get(uuid));
+      return Collections
+          .unmodifiableSet(new HashSet<>(nodeToContainer.get(uuid)));
     } finally {
       lock.readLock().unlock();
     }
@@ -293,12 +374,13 @@
    */
   @Override
   public String toString() {
+    // TODO - fix this method to include the commented out values
     StringBuilder builder = new StringBuilder();
     builder.append("Total number of nodes: ").append(getTotalNodeCount());
-    for (NodeState state : NodeState.values()) {
-      builder.append("Number of nodes in ").append(state).append(" state: ")
-          .append(getNodeCount(state));
-    }
+   // for (NodeState state : NodeState.values()) {
+   //   builder.append("Number of nodes in ").append(state).append(" state: ")
+   //       .append(getNodeCount(state));
+   // }
     return builder.toString();
   }
 
@@ -313,4 +395,50 @@
       throw new NodeNotFoundException("Node UUID: " + uuid);
     }
   }
+
+  /**
+   * Create a list of DatanodeInfo for all nodes matching the passed states.
+   * Passing null for one of the states acts like a wildcard for that state.
+   *
+   * @param opState
+   * @param health
+   * @return List of DatanodeInfo objects matching the passed state
+   */
+  private List<DatanodeInfo> filterNodes(
+      NodeOperationalState opState, NodeState health) {
+    if (opState != null && health != null) {
+      return filterNodes(new NodeStatus(opState, health));
+    }
+    if (opState == null && health == null) {
+      return getAllDatanodeInfos();
+    }
+    try {
+      lock.readLock().lock();
+      return nodeMap.values().stream()
+          .filter(n -> opState == null
+              || n.getNodeStatus().getOperationalState() == opState)
+          .filter(n -> health == null
+              || n.getNodeStatus().getHealth() == health)
+          .collect(Collectors.toList());
+    } finally {
+      lock.readLock().unlock();
+    }
+  }
+
+  /**
+   * Create a list of DatanodeInfo for all nodes matching the passed status.
+   *
+   * @param status
+   * @return List of DatanodeInfo objects matching the passed state
+   */
+  private List<DatanodeInfo> filterNodes(NodeStatus status) {
+    try {
+      lock.readLock().lock();
+      return nodeMap.values().stream()
+          .filter(n -> n.getNodeStatus().equals(status))
+          .collect(Collectors.toList());
+    }  finally {
+      lock.readLock().unlock();
+    }
+  }
 }
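
To make the wildcard semantics of the new getNodes/filterNodes overloads concrete, a short usage sketch follows (population of the NodeStateMap and the NodeOperationalState values introduced on this branch are assumed):

    // Usage sketch of the wildcard filtering; map population is assumed.
    import java.util.List;
    import java.util.UUID;

    import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState;
    import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
    import org.apache.hadoop.hdds.scm.node.states.NodeStateMap;

    public final class NodeStateMapQuerySketch {
      private NodeStateMapQuerySketch() {
      }

      public static void examples(NodeStateMap map) {
        // All stale nodes, regardless of operational state (health-only filter).
        List<UUID> stale = map.getNodes(null, NodeState.STALE);

        // All decommissioning nodes, regardless of health.
        List<UUID> decommissioning =
            map.getNodes(NodeOperationalState.DECOMMISSIONING, null);

        // Exact match on both dimensions.
        List<UUID> inServiceHealthy =
            map.getNodes(NodeOperationalState.IN_SERVICE, NodeState.HEALTHY);

        System.out.printf("stale=%d decommissioning=%d inServiceHealthy=%d%n",
            stale.size(), decommissioning.size(), inServiceHealthy.size());
      }
    }
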
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
index b9441be..4d699a6 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
@@ -28,6 +28,7 @@
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
 import org.apache.hadoop.hdds.scm.net.NetworkTopology;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -96,9 +97,12 @@
         continue;
       }
       if (pipeline != null &&
+            // single node pipelines are not accounted for while determining
+            // the pipeline limit for dn
+            pipeline.getType() == HddsProtos.ReplicationType.RATIS &&
+            (pipeline.getFactor() == HddsProtos.ReplicationFactor.ONE ||
           pipeline.getFactor().getNumber() == nodesRequired &&
-          pipeline.getType() == HddsProtos.ReplicationType.RATIS &&
-          pipeline.getPipelineState() == Pipeline.PipelineState.CLOSED) {
+          pipeline.getPipelineState() == Pipeline.PipelineState.CLOSED)) {
         pipelineNumDeductable++;
       }
     }
@@ -123,7 +127,7 @@
       throws SCMException {
     // get nodes in HEALTHY state
     List<DatanodeDetails> healthyNodes =
-        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+        nodeManager.getNodes(NodeStatus.inServiceHealthy());
     boolean multipleRacks = multipleRacksAvailable(healthyNodes);
     if (excludedNodes != null) {
       healthyNodes.removeAll(excludedNodes);
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineProvider.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineProvider.java
index 533f77e..8df976c 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineProvider.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineProvider.java
@@ -25,11 +25,11 @@
 import java.util.stream.Collectors;
 
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 
 /**
  * Interface for creating pipelines.
@@ -79,7 +79,7 @@
 
     // Get list of healthy nodes
     List<DatanodeDetails> dns = nodeManager
-        .getNodes(HddsProtos.NodeState.HEALTHY)
+        .getNodes(NodeStatus.inServiceHealthy())
         .parallelStream()
         .filter(dn -> !dnsUsed.contains(dn))
         .limit(factor.getNumber())
@@ -89,7 +89,7 @@
           .format("Cannot create pipeline of factor %d using %d nodes." +
                   " Used %d nodes. Healthy nodes %d", factor.getNumber(),
               dns.size(), dnsUsed.size(),
-              nodeManager.getNodes(HddsProtos.NodeState.HEALTHY).size());
+              nodeManager.getNodes(NodeStatus.inServiceHealthy()).size());
       throw new SCMException(e,
           SCMException.ResultCodes.FAILED_TO_FIND_SUITABLE_NODE);
     }
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
index 830db18..cd468bc 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
@@ -23,13 +23,13 @@
 
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.events.SCMEvents;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline.PipelineState;
 import org.apache.hadoop.hdds.scm.pipeline.leader.choose.algorithms.LeaderChoosePolicy;
 import org.apache.hadoop.hdds.scm.pipeline.leader.choose.algorithms.LeaderChoosePolicyFactory;
@@ -91,7 +91,7 @@
           ReplicationType.RATIS, factor).size() -
           getPipelineStateManager().getPipelines(ReplicationType.RATIS, factor,
               PipelineState.CLOSED).size()) > maxPipelinePerDatanode *
-          getNodeManager().getNodeCount(HddsProtos.NodeState.HEALTHY) /
+          getNodeManager().getNodeCount(NodeStatus.inServiceHealthy()) /
           factor.getNumber();
     }
 
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SimplePipelineProvider.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SimplePipelineProvider.java
index c7b6305..f1e6c1b 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SimplePipelineProvider.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SimplePipelineProvider.java
@@ -42,7 +42,6 @@
   public Pipeline create(ReplicationFactor factor) throws IOException {
     List<DatanodeDetails> dns = pickNodesNeverUsed(ReplicationType.STAND_ALONE,
         factor);
-
     if (dns.size() < factor.getNumber()) {
       String e = String
           .format("Cannot create pipeline of factor %d using %d nodes.",
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/SCMSecurityProtocolServerSideTranslatorPB.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/SCMSecurityProtocolServerSideTranslatorPB.java
index 9388a33..3f405dc 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/SCMSecurityProtocolServerSideTranslatorPB.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/SCMSecurityProtocolServerSideTranslatorPB.java
@@ -17,6 +17,7 @@
 package org.apache.hadoop.hdds.scm.protocol;
 
 import java.io.IOException;
+import java.util.List;
 
 import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos;
@@ -25,6 +26,8 @@
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMGetCertificateRequestProto;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMGetDataNodeCertRequestProto;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMGetOMCertRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMListCertificateRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMListCertificateResponseProto;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMSecurityRequest;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.SCMSecurityResponse;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos.Status;
@@ -102,6 +105,13 @@
             .setGetCertResponseProto(
                 getDataNodeCertificate(request.getGetDataNodeCertRequest()))
             .build();
+      case ListCertificate:
+        return SCMSecurityResponse.newBuilder()
+            .setCmdType(request.getCmdType())
+            .setStatus(Status.OK)
+            .setListCertificateResponseProto(
+                listCertificate(request.getListCertificateRequest()))
+            .build();
       default:
         throw new IllegalArgumentException(
             "Unknown request type: " + request.getCmdType());
@@ -184,4 +194,19 @@
 
   }
 
+  public SCMListCertificateResponseProto listCertificate(
+      SCMListCertificateRequestProto request) throws IOException {
+    List<String> certs = impl.listCertificate(request.getRole(),
+        request.getStartCertId(), request.getCount(), request.getIsRevoked());
+
+    SCMListCertificateResponseProto.Builder builder =
+        SCMListCertificateResponseProto
+            .newBuilder()
+            .setResponseCode(SCMListCertificateResponseProto
+                .ResponseCode.success)
+            .addAllCertificates(certs);
+    return builder.build();
+
+  }
 }
\ No newline at end of file
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocolServerSideTranslatorPB.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocolServerSideTranslatorPB.java
index 9285555..d8d3936 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocolServerSideTranslatorPB.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocolServerSideTranslatorPB.java
@@ -70,6 +70,12 @@
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.StartReplicationManagerResponseProto;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.StopReplicationManagerRequestProto;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.StopReplicationManagerResponseProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.DecommissionNodesRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.DecommissionNodesResponseProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.RecommissionNodesRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.RecommissionNodesResponseProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.StartMaintenanceNodesRequestProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.StartMaintenanceNodesResponseProto;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetSafeModeRuleStatusesRequestProto;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.GetSafeModeRuleStatusesResponseProto;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.SafeModeRuleStatusProto;
@@ -136,7 +142,7 @@
             request.getTraceID());
   }
 
-  @SuppressWarnings("methodlength")
+  @SuppressWarnings("checkstyle:methodlength")
   public ScmContainerLocationResponse processRequest(
       ScmContainerLocationRequest request) throws ServiceException {
     try {
@@ -294,6 +300,27 @@
                 getQueryUpgradeFinalizationProgress(
                 request.getQueryUpgradeFinalizationProgressRequest()))
             .build();
+      case DecommissionNodes:
+        return ScmContainerLocationResponse.newBuilder()
+            .setCmdType(request.getCmdType())
+            .setStatus(Status.OK)
+            .setDecommissionNodesResponse(decommissionNodes(
+                request.getDecommissionNodesRequest()))
+            .build();
+      case RecommissionNodes:
+        return ScmContainerLocationResponse.newBuilder()
+            .setCmdType(request.getCmdType())
+            .setStatus(Status.OK)
+            .setRecommissionNodesResponse(recommissionNodes(
+                request.getRecommissionNodesRequest()))
+            .build();
+      case StartMaintenanceNodes:
+        return ScmContainerLocationResponse.newBuilder()
+            .setCmdType(request.getCmdType())
+            .setStatus(Status.OK)
+            .setStartMaintenanceNodesResponse(startMaintenanceNodes(
+                request.getStartMaintenanceNodesRequest()))
+          .build();
       default:
         throw new IllegalArgumentException(
             "Unknown command type: " + request.getCmdType());
@@ -380,13 +407,19 @@
       StorageContainerLocationProtocolProtos.NodeQueryRequestProto request)
       throws IOException {
 
-    HddsProtos.NodeState nodeState = request.getState();
-    List<HddsProtos.Node> datanodes = impl.queryNode(nodeState,
+    HddsProtos.NodeOperationalState opState = null;
+    HddsProtos.NodeState nodeState = null;
+    if (request.hasState()) {
+      nodeState = request.getState();
+    }
+    if (request.hasOpState()) {
+      opState = request.getOpState();
+    }
+    List<HddsProtos.Node> datanodes = impl.queryNode(opState, nodeState,
         request.getScope(), request.getPoolName());
     return NodeQueryResponseProto.newBuilder()
         .addAllDatanodes(datanodes)
         .build();
-
   }
 
   public SCMCloseContainerResponseProto closeContainer(
@@ -562,4 +595,25 @@
         .setIsRunning(impl.getReplicationManagerStatus()).build();
   }
 
+  public DecommissionNodesResponseProto decommissionNodes(
+      DecommissionNodesRequestProto request) throws IOException {
+    impl.decommissionNodes(request.getHostsList());
+    return DecommissionNodesResponseProto.newBuilder()
+        .build();
+  }
+
+  public RecommissionNodesResponseProto recommissionNodes(
+      RecommissionNodesRequestProto request) throws IOException {
+    impl.recommissionNodes(request.getHostsList());
+    return RecommissionNodesResponseProto.newBuilder().build();
+  }
+
+  public StartMaintenanceNodesResponseProto startMaintenanceNodes(
+      StartMaintenanceNodesRequestProto request) throws IOException {
+    impl.startMaintenanceNodes(request.getHostsList(),
+        (int)request.getEndInHours());
+    return StartMaintenanceNodesResponseProto.newBuilder()
+        .build();
+  }
+
 }
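
These translator methods back the new decommission, recommission and maintenance RPCs. A client holding a StorageContainerLocationProtocol reference would drive them roughly as in the sketch below; how the proxy is obtained (and admin authentication) is assumed to be handled elsewhere.

    // Hedged client-side sketch; obtaining the protocol proxy is out of scope.
    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    import org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol;

    public final class DecommissionClientSketch {
      private DecommissionClientSketch() {
      }

      public static void run(StorageContainerLocationProtocol scmClient)
          throws IOException {
        List<String> hosts = Arrays.asList("dn1.example.com", "dn2.example.com");

        // Begin decommissioning the listed datanodes.
        scmClient.decommissionNodes(hosts);

        // Put a host into maintenance for 24 hours.
        scmClient.startMaintenanceNodes(Arrays.asList("dn3.example.com"), 24);

        // Return previously decommissioned/maintenance hosts to service.
        scmClient.recommissionNodes(hosts);
      }
    }
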
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/ContainerSafeModeRule.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/ContainerSafeModeRule.java
index dbb4eb6..ffa0209 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/ContainerSafeModeRule.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/ContainerSafeModeRule.java
@@ -23,19 +23,18 @@
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.atomic.AtomicLong;
 
-import com.google.common.base.Preconditions;
-import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.scm.container.ContainerInfo;
 import org.apache.hadoop.hdds.scm.events.SCMEvents;
-import org.apache.hadoop.hdds.scm.server.SCMDatanodeProtocolServer
-    .NodeRegistrationContainerReport;
-
-import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeProtocolServer.NodeRegistrationContainerReport;
 import org.apache.hadoop.hdds.server.events.EventQueue;
 import org.apache.hadoop.hdds.server.events.TypedEvent;
 
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+
 /**
  * Class defining Safe mode exit criteria for Containers.
  */
@@ -128,7 +127,11 @@
 
   @Override
   public String getStatusText() {
-    return "currentContainerThreshold " + getCurrentContainerThreshold()
-        + " >= safeModeCutoff " + this.safeModeCutoff;
+    return String
+        .format(
+            "%% of containers with at least one reported replica (=%1.2f) >= "
+                + "safeModeCutoff (=%1.2f)",
+            getCurrentContainerThreshold(), this.safeModeCutoff);
   }
+
 }
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/DataNodeSafeModeRule.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/DataNodeSafeModeRule.java
index fefe4d4..ea5a78f 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/DataNodeSafeModeRule.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/DataNodeSafeModeRule.java
@@ -82,7 +82,8 @@
 
   @Override
   public String getStatusText() {
-    return "registeredDns " + this.registeredDns + " >= requiredDns "
-        + this.requiredDns;
+    return String
+        .format("registered datanodes (=%d) >= required datanodes (=%d)",
+            this.registeredDns, this.requiredDns);
   }
 }
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/HealthyPipelineSafeModeRule.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/HealthyPipelineSafeModeRule.java
index b6ab0d0..d8c5778 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/HealthyPipelineSafeModeRule.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/HealthyPipelineSafeModeRule.java
@@ -149,8 +149,9 @@
 
   @Override
   public String getStatusText() {
-    return "currentHealthyPipelineCount " + this.currentHealthyPipelineCount
-        + " >= healthyPipelineThresholdCount "
-        + this.healthyPipelineThresholdCount;
+    return String.format("healthy Ratis/THREE pipelines (=%d) >= "
+            + "healthyPipelineThresholdCount (=%d)",
+        this.currentHealthyPipelineCount,
+        this.healthyPipelineThresholdCount);
   }
 }
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/OneReplicaPipelineSafeModeRule.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/OneReplicaPipelineSafeModeRule.java
index e243622..5268bc9 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/OneReplicaPipelineSafeModeRule.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/OneReplicaPipelineSafeModeRule.java
@@ -17,10 +17,12 @@
 
 package org.apache.hadoop.hdds.scm.safemode;
 
-import com.google.common.annotations.VisibleForTesting;
-import com.google.common.base.Preconditions;
-import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.stream.Collectors;
+
 import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.PipelineReport;
 import org.apache.hadoop.hdds.scm.events.SCMEvents;
@@ -31,13 +33,12 @@
 import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.PipelineReportFromDatanode;
 import org.apache.hadoop.hdds.server.events.EventQueue;
 import org.apache.hadoop.hdds.server.events.TypedEvent;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import java.util.HashSet;
-import java.util.Set;
-import java.util.stream.Collectors;
-
 /**
  * This rule covers whether we have at least one datanode is reported for each
  * open pipeline. This rule is for all open containers, we have at least one
@@ -149,8 +150,11 @@
 
   @Override
   public String getStatusText() {
-    return "currentReportedPipelineCount "
-        + this.currentReportedPipelineCount + " >= thresholdCount "
-        + this.thresholdCount;
+    return String
+        .format(
+            "reported Ratis/THREE pipelines with at least one datanode (=%d) "
+                + ">= threshold (=%d)",
+            this.currentReportedPipelineCount,
+            this.thresholdCount);
   }
 }
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMCertStore.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMCertStore.java
index b23d938..e2602ee 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMCertStore.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMCertStore.java
@@ -22,12 +22,17 @@
 import java.io.IOException;
 import java.math.BigInteger;
 import java.security.cert.X509Certificate;
+import java.util.ArrayList;
+import java.util.List;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;
+
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.scm.metadata.SCMMetadataStore;
 import org.apache.hadoop.hdds.security.exception.SCMSecurityException;
 import org.apache.hadoop.hdds.security.x509.certificate.authority.CertificateStore;
 import org.apache.hadoop.hdds.utils.db.BatchOperation;
+import org.apache.hadoop.hdds.utils.db.Table;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -112,4 +117,41 @@
       return scmMetadataStore.getRevokedCertsTable().get(serialID);
     }
   }
+
+  @Override
+  public List<X509Certificate> listCertificate(HddsProtos.NodeType role,
+      BigInteger startSerialID, int count, CertType certType)
+      throws IOException {
+    // TODO: Filter by role
+    List<? extends Table.KeyValue<BigInteger, X509Certificate>> certs;
+    if (startSerialID.longValue() == 0) {
+      startSerialID = null;
+    }
+    if (certType == CertType.VALID_CERTS) {
+      certs = scmMetadataStore.getValidCertsTable().getRangeKVs(
+          startSerialID, count);
+    } else {
+      certs = scmMetadataStore.getRevokedCertsTable().getRangeKVs(
+          startSerialID, count);
+    }
+    List<X509Certificate> results = new ArrayList<>(certs.size());
+    for (Table.KeyValue<BigInteger, X509Certificate> kv : certs) {
+      try {
+        X509Certificate cert = kv.getValue();
+        // TODO: filter certificate based on CN and specified role.
+        // This requires change of the approved subject CN format:
+        // Subject: O=CID-e66d4728-32bb-4282-9770-351a7e913f07,
+        // OU=9a7c4f86-c862-4067-b12c-e7bca51d3dfe, CN=root@98dba189d5f0
+
+        // The new format will look like below that are easier to filter.
+        // CN=FQDN/user=root/role=datanode/...
+        results.add(cert);
+      } catch (IOException e) {
+        LOG.error("Failed to list certificates from SCM metadata store", e);
+        throw new SCMSecurityException(
+            "Failed to list certificates from SCM metadata store.");
+      }
+    }
+    return results;
+  }
 }
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
index 49a4233..28199fa 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
@@ -21,12 +21,17 @@
  */
 package org.apache.hadoop.hdds.scm.server;
 
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.Maps;
+import com.google.protobuf.BlockingService;
 import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
 import java.util.Map;
+import java.util.Set;
+import java.util.TreeSet;
 import java.util.stream.Collectors;
 
 import org.apache.commons.lang3.tuple.Pair;
@@ -38,23 +43,25 @@
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos;
 import org.apache.hadoop.hdds.scm.ScmInfo;
 import org.apache.hadoop.hdds.scm.ScmUtils;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException;
+import org.apache.hadoop.hdds.scm.safemode.SafeModePrecheck;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
 import org.apache.hadoop.hdds.scm.container.ContainerInfo;
 import org.apache.hadoop.hdds.scm.container.ContainerNotFoundException;
 import org.apache.hadoop.hdds.scm.container.ContainerReplica;
 import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
-import org.apache.hadoop.hdds.scm.events.SCMEvents;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
-import org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException;
 import org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol;
 import org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocolServerSideTranslatorPB;
 import org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolPB;
 import org.apache.hadoop.hdds.scm.safemode.SCMSafeModeManager.SafeModeStatus;
-import org.apache.hadoop.hdds.scm.safemode.SafeModePrecheck;
 import org.apache.hadoop.hdds.server.events.EventHandler;
 import org.apache.hadoop.hdds.server.events.EventPublisher;
 import org.apache.hadoop.hdds.utils.HddsServerUtil;
@@ -71,10 +78,6 @@
 import org.apache.hadoop.ozone.audit.Auditor;
 import org.apache.hadoop.ozone.audit.SCMAction;
 
-import com.google.common.annotations.VisibleForTesting;
-import com.google.common.base.Preconditions;
-import com.google.common.collect.Maps;
-import com.google.protobuf.BlockingService;
 import com.google.protobuf.ProtocolMessageEnum;
 import static org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos.StorageContainerLocationProtocolService.newReflectiveBlockingService;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_CLIENT_ADDRESS_KEY;
@@ -383,7 +386,8 @@
   }
 
   @Override
-  public List<HddsProtos.Node> queryNode(HddsProtos.NodeState state,
+  public List<HddsProtos.Node> queryNode(
+      HddsProtos.NodeOperationalState opState, HddsProtos.NodeState state,
       HddsProtos.QueryScope queryScope, String poolName) throws
       IOException {
 
@@ -392,13 +396,57 @@
     }
 
     List<HddsProtos.Node> result = new ArrayList<>();
-    queryNode(state).forEach(node -> result.add(HddsProtos.Node.newBuilder()
-        .setNodeID(node.getProtoBufMessage())
-        .addNodeStates(state)
-        .build()));
-
+    for (DatanodeDetails node : queryNode(opState, state)) {
+      try {
+        NodeStatus ns = scm.getScmNodeManager().getNodeStatus(node);
+        result.add(HddsProtos.Node.newBuilder()
+            .setNodeID(node.getProtoBufMessage())
+            .addNodeStates(ns.getHealth())
+            .addNodeOperationalStates(ns.getOperationalState())
+            .build());
+      } catch (NodeNotFoundException e) {
+        throw new IOException(
+            "An unexpected error occurred querying the NodeStatus", e);
+      }
+    }
     return result;
+  }
 
+  @Override
+  public void decommissionNodes(List<String> nodes) throws IOException {
+    String remoteUser = getRpcRemoteUsername();
+    try {
+      getScm().checkAdminAccess(remoteUser);
+      scm.getScmDecommissionManager().decommissionNodes(nodes);
+    } catch (Exception ex) {
+      LOG.error("Failed to decommission nodes", ex);
+      throw ex;
+    }
+  }
+
+  @Override
+  public void recommissionNodes(List<String> nodes) throws IOException {
+    String remoteUser = getRpcRemoteUsername();
+    try {
+      getScm().checkAdminAccess(remoteUser);
+      scm.getScmDecommissionManager().recommissionNodes(nodes);
+    } catch (Exception ex) {
+      LOG.error("Failed to recommission nodes", ex);
+      throw ex;
+    }
+  }
+
+  @Override
+  public void startMaintenanceNodes(List<String> nodes, int endInHours)
+      throws IOException {
+    String remoteUser = getRpcRemoteUsername();
+    try {
+      getScm().checkAdminAccess(remoteUser);
+      scm.getScmDecommissionManager().startMaintenanceNodes(nodes, endInHours);
+    } catch (Exception ex) {
+      LOG.error("Failed to place nodes into maintenance mode", ex);
+      throw ex;
+    }
   }
 
   @Override
@@ -463,6 +511,8 @@
   @Override
   public void deactivatePipeline(HddsProtos.PipelineID pipelineID)
       throws IOException {
+    String remoteUser = getRemoteUserName();
+    getScm().checkAdminAccess(remoteUser);
     AUDIT.logReadSuccess(buildAuditMessageForSuccess(
         SCMAction.DEACTIVATE_PIPELINE, null));
     scm.getPipelineManager().deactivatePipeline(
@@ -472,6 +522,8 @@
   @Override
   public void closePipeline(HddsProtos.PipelineID pipelineID)
       throws IOException {
+    String remoteUser = getRemoteUserName();
+    getScm().checkAdminAccess(remoteUser);
     Map<String, String> auditMap = Maps.newHashMap();
     auditMap.put("pipelineID", pipelineID.getId());
     PipelineManager pipelineManager = scm.getPipelineManager();
@@ -535,6 +587,8 @@
    */
   @Override
   public boolean forceExitSafeMode() throws IOException {
+    String remoteUser = getRemoteUserName();
+    getScm().checkAdminAccess(remoteUser);
     AUDIT.logWriteSuccess(
         buildAuditMessageForSuccess(SCMAction.FORCE_EXIT_SAFE_MODE, null)
     );
@@ -542,14 +596,18 @@
   }
 
   @Override
-  public void startReplicationManager() {
+  public void startReplicationManager() throws IOException {
+    String remoteUser = getRemoteUserName();
+    getScm().checkAdminAccess(remoteUser);
     AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
         SCMAction.START_REPLICATION_MANAGER, null));
     scm.getReplicationManager().start();
   }
 
   @Override
-  public void stopReplicationManager() {
+  public void stopReplicationManager() throws IOException {
+    String remoteUser = getRemoteUserName();
+    getScm().checkAdminAccess(remoteUser);
     AUDIT.logWriteSuccess(buildAuditMessageForSuccess(
         SCMAction.STOP_REPLICATION_MANAGER, null));
     scm.getReplicationManager().stop();
@@ -585,12 +643,13 @@
    * operation between the
    * operators.
    *
-   * @param state - NodeStates.
+   * @param opState - NodeOperationalState.
+   * @param state - NodeState.
    * @return List of Datanodes.
    */
-  public List<DatanodeDetails> queryNode(HddsProtos.NodeState state) {
-    Preconditions.checkNotNull(state, "Node Query set cannot be null");
-    return queryNodeState(state);
+  public List<DatanodeDetails> queryNode(
+      HddsProtos.NodeOperationalState opState, HddsProtos.NodeState state) {
+    return new ArrayList<>(queryNodeState(opState, state));
   }
 
   @VisibleForTesting
@@ -609,11 +668,19 @@
   /**
    * Query the System for Nodes.
    *
+   * @param opState - The node operational state
    * @param nodeState - NodeState that we are interested in matching.
-   * @return List of Datanodes that match the NodeState.
+   * @return Set of Datanodes that match the NodeState.
    */
-  private List<DatanodeDetails> queryNodeState(HddsProtos.NodeState nodeState) {
-    return scm.getScmNodeManager().getNodes(nodeState);
+  private Set<DatanodeDetails> queryNodeState(
+      HddsProtos.NodeOperationalState opState, HddsProtos.NodeState nodeState) {
+    Set<DatanodeDetails> returnSet = new TreeSet<>();
+    List<DatanodeDetails> tmp = scm.getScmNodeManager()
+        .getNodes(opState, nodeState);
+    if ((tmp != null) && (tmp.size() > 0)) {
+      returnSet.addAll(tmp);
+    }
+    return returnSet;
   }
 
   @Override
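
The node query path now reports both dimensions per node. The sketch below is a hedged consumer-side illustration against SCMClientProtocolServer; server construction is assumed, and null for either state argument acts as a wildcard, mirroring the node manager API.

    // Consumer-side sketch of the two-dimensional node query; server setup is
    // assumed to exist elsewhere.
    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
    import org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer;

    public final class QueryNodeResultSketch {
      private QueryNodeResultSketch() {
      }

      public static void print(SCMClientProtocolServer server)
          throws IOException {
        // null opState means "any operational state"; only health is filtered.
        List<HddsProtos.Node> nodes = server.queryNode(
            null, HddsProtos.NodeState.HEALTHY,
            HddsProtos.QueryScope.CLUSTER, "");
        for (HddsProtos.Node n : nodes) {
          System.out.println(n.getNodeID().getHostName()
              + " health=" + n.getNodeStates(0)
              + " opState=" + n.getNodeOperationalStates(0));
        }
      }
    }
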
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMContainerMetrics.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMContainerMetrics.java
index 475c000..d7d47a7 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMContainerMetrics.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMContainerMetrics.java
@@ -27,6 +27,7 @@
 import java.util.Map;
 
 import org.apache.hadoop.hdds.annotation.InterfaceAudience;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.metrics2.MetricsCollector;
 import org.apache.hadoop.metrics2.MetricsSource;
 import org.apache.hadoop.metrics2.MetricsSystem;
@@ -66,6 +67,11 @@
   public void getMetrics(MetricsCollector collector, boolean all) {
     Map<String, Integer> stateCount = scmmxBean.getContainerStateCount();
 
+    int totalContainers = 0;
+    for (HddsProtos.LifeCycleState state : HddsProtos.LifeCycleState.values()) {
+      totalContainers = totalContainers + stateCount.get(state.toString());
+    }
+
     collector.addRecord(SOURCE)
         .addGauge(Interns.info("OpenContainers",
             "Number of open containers"),
@@ -84,6 +90,9 @@
             stateCount.get(DELETING.toString()))
         .addGauge(Interns.info("DeletedContainers",
             "Number of containers in deleted state"),
-            stateCount.get(DELETED.toString()));
+            stateCount.get(DELETED.toString()))
+        .addGauge(Interns.info("TotalContainers",
+            "Number of all containers"),
+            totalContainers);
   }
 }
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
index 87a3462..c7837f4 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
@@ -72,6 +72,7 @@
 import org.apache.hadoop.ozone.protocol.commands.RegisteredCommand;
 import org.apache.hadoop.ozone.protocol.commands.ReplicateContainerCommand;
 import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+import org.apache.hadoop.ozone.protocol.commands.SetNodeOperationalStateCommand;
 import org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolPB;
 import org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolServerSideTranslatorPB;
 import org.apache.hadoop.security.authorize.PolicyProvider;
@@ -89,6 +90,7 @@
 import static org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type.finalizeNewLayoutVersionCommand;
 import static org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type.replicateContainerCommand;
 import static org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type.reregisterCommand;
+import static org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type.setNodeOperationalStateCommand;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DATANODE_ADDRESS_KEY;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_HANDLER_COUNT_DEFAULT;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_HANDLER_COUNT_KEY;
@@ -356,6 +358,12 @@
             .setFinalizeNewLayoutVersionCommandProto(
                 ((FinalizeNewLayoutVersionCommand)cmd).getProto())
             .build();
+    case setNodeOperationalStateCommand:
+      return builder
+          .setCommandType(setNodeOperationalStateCommand)
+          .setSetNodeOperationalStateCommandProto(
+              ((SetNodeOperationalStateCommand)cmd).getProto())
+          .build();
     default:
       throw new IllegalArgumentException("Scm command " +
           cmd.getType().toString() + " is not implemented");
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMMXBean.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMMXBean.java
index f10a544..6e6d440 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMMXBean.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMMXBean.java
@@ -18,11 +18,11 @@
 
 package org.apache.hadoop.hdds.scm.server;
 
+import java.util.Map;
+
 import org.apache.hadoop.hdds.annotation.InterfaceAudience;
 import org.apache.hadoop.hdds.server.ServiceRuntimeInfo;
 
-import java.util.Map;
-
 /**
  *
  * This is the JMX management interface for scm information.
@@ -65,7 +65,7 @@
    */
   Map<String, Integer> getContainerStateCount();
 
-  Map<String, String> getRuleStatusMetrics();
+  Map<String, String[]> getSafeModeRuleStatus();
 
   String getScmId();
 
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
index 7f7553c..8b8eff4 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
@@ -22,6 +22,8 @@
 import java.net.InetSocketAddress;
 import java.security.cert.CertificateException;
 import java.security.cert.X509Certificate;
+import java.util.ArrayList;
+import java.util.List;
 import java.util.Objects;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.Future;
@@ -29,11 +31,13 @@
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.hdds.annotation.InterfaceAudience;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.DatanodeDetailsProto;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.OzoneManagerDetailsProto;
 import org.apache.hadoop.hdds.protocol.proto.SCMSecurityProtocolProtos;
 import org.apache.hadoop.hdds.protocolPB.SCMSecurityProtocolPB;
 import org.apache.hadoop.hdds.scm.protocol.SCMSecurityProtocolServerSideTranslatorPB;
+import org.apache.hadoop.hdds.security.exception.SCMSecurityException;
 import org.apache.hadoop.hdds.utils.HddsServerUtil;
 import org.apache.hadoop.hdds.scm.ScmConfig;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
@@ -69,7 +73,6 @@
   SCMSecurityProtocolServer(OzoneConfiguration conf,
       CertificateServer certificateServer) throws IOException {
     this.certificateServer = certificateServer;
-
     final int handlerCount =
         conf.getInt(ScmConfigKeys.OZONE_SCM_SECURITY_HANDLER_COUNT_KEY,
             ScmConfigKeys.OZONE_SCM_SECURITY_HANDLER_COUNT_DEFAULT);
@@ -192,6 +195,33 @@
     }
   }
 
+  /**
+   * Lists certificates issued to nodes of the given role.
+   * @param role            - node role: OM/SCM/DN.
+   * @param startSerialId   - start certificate serial id.
+   * @param count           - max number of certificates returned in a batch.
+   * @param isRevoked       - whether to list revoked certs only.
+   * @return list of certificates in PEM encoded form.
+   * @throws IOException if listing or encoding a certificate fails.
+   */
+  @Override
+  public List<String> listCertificate(HddsProtos.NodeType role,
+      long startSerialId, int count, boolean isRevoked) throws IOException {
+    List<X509Certificate> certificates =
+        certificateServer.listCertificate(role, startSerialId, count,
+            isRevoked);
+    List<String> results = new ArrayList<>(certificates.size());
+    for (X509Certificate cert : certificates) {
+      try {
+        String certStr = CertificateCodec.getPEMEncodedString(cert);
+        results.add(certStr);
+      } catch (SCMSecurityException e) {
+        throw new IOException("listCertificate operation failed. ", e);
+      }
+    }
+    return results;
+  }
+
   public RPC.Server getRpcServer() {
     return rpcServer;
   }
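
listCertificate returns PEM encoded certificates in batches, driven by a start serial id, a batch size and a revoked-only flag. A hedged caller sketch; the helper name, the batch size and the use of NodeType.DATANODE are illustrative assumptions:

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
    import org.apache.hadoop.hdds.scm.server.SCMSecurityProtocolServer;

    // Fetch the first batch of up to 20 valid (non-revoked) datanode
    // certificates, starting from serial id 0, and print the PEM strings.
    static void printDatanodeCerts(SCMSecurityProtocolServer securityServer)
        throws IOException {
      List<String> batch = securityServer.listCertificate(
          HddsProtos.NodeType.DATANODE, 0, 20, false);
      for (String pem : batch) {
        System.out.println(pem);
      }
    }
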
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
index d77524b..d984122 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
@@ -34,7 +34,6 @@
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.TimeUnit;
 
-import org.apache.commons.lang3.tuple.Pair;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdds.HddsUtils;
 import org.apache.hadoop.hdds.annotation.InterfaceAudience;
@@ -76,6 +75,7 @@
 import org.apache.hadoop.hdds.scm.net.NetworkTopologyImpl;
 import org.apache.hadoop.hdds.scm.node.DeadNodeHandler;
 import org.apache.hadoop.hdds.scm.node.NewNodeHandler;
+import org.apache.hadoop.hdds.scm.node.StartDatanodeAdminHandler;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
 import org.apache.hadoop.hdds.scm.node.NodeReportHandler;
 import org.apache.hadoop.hdds.scm.node.NonHealthyToReadOnlyHealthyNodeHandler;
@@ -84,6 +84,7 @@
 import org.apache.hadoop.hdds.scm.node.StaleNodeHandler;
 import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.node.NodeDecommissionManager;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineActionHandler;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineReportHandler;
@@ -126,6 +127,7 @@
 import com.google.common.cache.CacheBuilder;
 import com.google.common.cache.RemovalListener;
 import com.google.protobuf.BlockingService;
+import org.apache.commons.lang3.tuple.Pair;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.HDDS_SCM_WATCHER_TIMEOUT_DEFAULT;
 import static org.apache.hadoop.hdds.scm.pipeline.Pipeline.PipelineState.CLOSED;
 
@@ -173,6 +175,7 @@
   private ContainerManager containerManager;
   private BlockManager scmBlockManager;
   private final SCMStorageConfig scmStorageConfig;
+  private NodeDecommissionManager scmDecommissionManager;
 
   private SCMMetadataStore scmMetadataStore;
 
@@ -310,11 +313,14 @@
     CommandStatusReportHandler cmdStatusReportHandler =
         new CommandStatusReportHandler();
 
-    NewNodeHandler newNodeHandler = new NewNodeHandler(pipelineManager, conf);
+    NewNodeHandler newNodeHandler = new NewNodeHandler(pipelineManager,
+        scmDecommissionManager, conf);
     StaleNodeHandler staleNodeHandler =
         new StaleNodeHandler(scmNodeManager, pipelineManager, conf);
     DeadNodeHandler deadNodeHandler = new DeadNodeHandler(scmNodeManager,
         pipelineManager, containerManager);
+    StartDatanodeAdminHandler datanodeStartAdminHandler =
+        new StartDatanodeAdminHandler(scmNodeManager, pipelineManager);
     ReadOnlyHealthyToHealthyNodeHandler readOnlyHealthyToHealthyNodeHandler =
         new ReadOnlyHealthyToHealthyNodeHandler(pipelineManager, conf);
     NonHealthyToReadOnlyHealthyNodeHandler
@@ -347,7 +353,6 @@
     blockProtocolServer = new SCMBlockProtocolServer(conf, this);
     clientProtocolServer = new SCMClientProtocolServer(conf, this);
     httpServer = new StorageContainerManagerHttpServer(conf);
-
     eventQueue.addHandler(SCMEvents.DATANODE_COMMAND, scmNodeManager);
     eventQueue.addHandler(SCMEvents.RETRIABLE_DATANODE_COMMAND, scmNodeManager);
     eventQueue.addHandler(SCMEvents.NODE_REPORT, nodeReportHandler);
@@ -363,6 +368,8 @@
     eventQueue.addHandler(SCMEvents.NON_HEALTHY_TO_READONLY_HEALTHY_NODE,
         nonHealthyToReadOnlyHealthyNodeHandler);
     eventQueue.addHandler(SCMEvents.DEAD_NODE, deadNodeHandler);
+    eventQueue.addHandler(SCMEvents.START_ADMIN_ON_NODE,
+        datanodeStartAdminHandler);
     eventQueue.addHandler(SCMEvents.CMD_STATUS_REPORT, cmdStatusReportHandler);
     eventQueue
         .addHandler(SCMEvents.PENDING_DELETE_STATUS, pendingDeleteHandler);
@@ -463,6 +470,8 @@
       scmSafeModeManager = new SCMSafeModeManager(conf,
           containerManager.getContainers(), pipelineManager, eventQueue);
     }
+    scmDecommissionManager = new NodeDecommissionManager(conf, scmNodeManager,
+        containerManager, eventQueue, replicationManager);
   }
 
   /**
@@ -851,6 +860,13 @@
     }
 
     try {
+      LOG.info("Stopping the Datanode Admin Monitor.");
+      scmDecommissionManager.stop();
+    } catch (Exception ex) {
+      LOG.error("The Datanode Admin Monitor failed to stop", ex);
+    }
+
+    try {
       LOG.info("Stopping Lease Manager of the command watchers");
       commandWatcherLeaseManager.shutdown();
     } catch (Exception ex) {
@@ -965,7 +981,18 @@
    * @return int -- count
    */
   public int getNodeCount(NodeState nodestate) {
-    return scmNodeManager.getNodeCount(nodestate);
+    // TODO - decomm - this probably needs to accept opState and health
+    return scmNodeManager.getNodeCount(null, nodestate);
+  }
+
+  /**
+   * Returns the node decommission manager.
+   *
+   * @return NodeDecommissionManager The decommission manager used by the SCM.
+   */
+  public NodeDecommissionManager getScmDecommissionManager() {
+    return scmDecommissionManager;
   }
 
   /**
@@ -1144,11 +1171,13 @@
   }
 
   @Override
-  public Map<String, String> getRuleStatusMetrics() {
-    Map<String, String> map = new HashMap<>();
+  public Map<String, String[]> getSafeModeRuleStatus() {
+    Map<String, String[]> map = new HashMap<>();
     for (Map.Entry<String, Pair<Boolean, String>> entry :
         scmSafeModeManager.getRuleStatus().entrySet()) {
-      map.put(entry.getKey(), entry.getValue().getRight());
+      String[] status =
+          {entry.getValue().getRight(), entry.getValue().getLeft().toString()};
+      map.put(entry.getKey(), status);
     }
     return map;
   }
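
getSafeModeRuleStatus now maps each rule id to a two element array; judging from the table headers added to scm-overview.html below, element 0 is the rule definition text and element 1 is whether the rule has passed. A small consumer sketch under that assumption:

    import java.util.Map;

    // Print one line per safemode rule: rule id, definition, passed flag,
    // matching the three columns rendered by the SCM overview page.
    static void printSafeModeRules(Map<String, String[]> ruleStatus) {
      for (Map.Entry<String, String[]> e : ruleStatus.entrySet()) {
        System.out.println(e.getKey() + " | " + e.getValue()[0]
            + " | passed=" + e.getValue()[1]);
      }
    }
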
diff --git a/hadoop-hdds/server-scm/src/main/resources/webapps/scm/scm-overview.html b/hadoop-hdds/server-scm/src/main/resources/webapps/scm/scm-overview.html
index 4e900bb..a6f4fdf 100644
--- a/hadoop-hdds/server-scm/src/main/resources/webapps/scm/scm-overview.html
+++ b/hadoop-hdds/server-scm/src/main/resources/webapps/scm/scm-overview.html
@@ -60,10 +60,18 @@
 <h2>Safemode rules statuses</h2>
 
 <table class="table table-bordered table-striped" class="col-md-6">
+    <thead>
+    <tr>
+        <th>Rule Id</th>
+        <th>Rule definition</th>
+        <th>Passed</th>
+    </tr>
+    </thead>
     <tbody>
-    <tr ng-repeat="typestat in $ctrl.overview.jmx.RuleStatusMetrics">
+    <tr ng-repeat="typestat in $ctrl.overview.jmx.SafeModeRuleStatus">
         <td>{{typestat.key}}</td>
-        <td>{{typestat.value}}</td>
+        <td>{{typestat.value[0]}}</td>
+        <td>{{typestat.value[1]}}</td>
     </tr>
     </tbody>
 </table>
\ No newline at end of file
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
index 6b6e8d8..82bdd60 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
@@ -34,6 +34,7 @@
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.SCMCommandProto;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.TestUtils;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
 import org.apache.hadoop.hdds.scm.container.MockNodeManager;
@@ -505,8 +506,8 @@
     nodeManager.setNumHealthyVolumes(1);
     // create pipelines
     for (int i = 0;
-         i < nodeManager.getNodes(HddsProtos.NodeState.HEALTHY).size() / factor
-             .getNumber(); i++) {
+         i < nodeManager.getNodes(NodeStatus.inServiceHealthy()).size()
+             / factor.getNumber(); i++) {
       pipelineManager.createPipeline(type, factor);
     }
     TestUtils.openAllRatisPipelines(pipelineManager);
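
The tests now query nodes by NodeStatus rather than a bare NodeState. A small sketch of the two spellings of "in service and healthy" used in this patch; their equivalence is assumed from the factory name and from the two-argument constructor used in TestReplicationManager further down:

    import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
    import org.apache.hadoop.hdds.scm.node.NodeStatus;

    static void nodeStatusSpellings() {
      // Shorthand factory used throughout the updated tests.
      NodeStatus viaFactory = NodeStatus.inServiceHealthy();
      // Explicit form used where other operational states are needed.
      NodeStatus viaConstructor = new NodeStatus(
          HddsProtos.NodeOperationalState.IN_SERVICE,
          HddsProtos.NodeState.HEALTHY);
    }
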
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
index 96cd832..184a264 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
@@ -26,6 +26,7 @@
 import org.apache.hadoop.hdds.scm.net.NetworkTopology;
 import org.apache.hadoop.hdds.scm.net.NetworkTopologyImpl;
 import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
 import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
@@ -37,6 +38,7 @@
 import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.NodeReportProto;
 import org.apache.hadoop.hdds.protocol.proto
@@ -175,14 +177,28 @@
     this.safemode = safemode;
   }
 
+
   /**
    * Gets all Live Datanodes that are currently communicating with SCM.
    *
-   * @param nodestate - State of the node
+   * @param status The status of the node
    * @return List of Datanodes that are Heartbeating SCM.
    */
   @Override
-  public List<DatanodeDetails> getNodes(HddsProtos.NodeState nodestate) {
+  public List<DatanodeDetails> getNodes(NodeStatus status) {
+    return getNodes(status.getOperationalState(), status.getHealth());
+  }
+
+  /**
+   * Gets all Live Datanodes that are currently communicating with SCM.
+   *
+   * @param opState - The operational State of the node
+   * @param nodestate - The health of the node
+   * @return List of Datanodes that are Heartbeating SCM.
+   */
+  @Override
+  public List<DatanodeDetails> getNodes(
+      HddsProtos.NodeOperationalState opState, HddsProtos.NodeState nodestate) {
     if (nodestate == HEALTHY) {
       return healthyNodes;
     }
@@ -201,12 +217,24 @@
   /**
    * Returns the Number of Datanodes that are communicating with SCM.
    *
+   * @param status - Status of the node
+   * @return int -- count
+   */
+  @Override
+  public int getNodeCount(NodeStatus status) {
+    return getNodeCount(status.getOperationalState(), status.getHealth());
+  }
+
+  /**
+   * Returns the Number of Datanodes that are communicating with SCM.
+   *
    * @param nodestate - State of the node
    * @return int -- count
    */
   @Override
-  public int getNodeCount(HddsProtos.NodeState nodestate) {
-    List<DatanodeDetails> nodes = getNodes(nodestate);
+  public int getNodeCount(
+      HddsProtos.NodeOperationalState opState, HddsProtos.NodeState nodestate) {
+    List<DatanodeDetails> nodes = getNodes(opState, nodestate);
     if (nodes != null) {
       return nodes.size();
     }
@@ -263,11 +291,31 @@
    * @return Healthy/Stale/Dead.
    */
   @Override
-  public HddsProtos.NodeState getNodeState(DatanodeDetails dd) {
+  public NodeStatus getNodeStatus(DatanodeDetails dd)
+      throws NodeNotFoundException {
     return null;
   }
 
   /**
+   * Set the operation state of a node.
+   * @param datanodeDetails The datanode to set the new state for
+   * @param newState The new operational state for the node
+   */
+  public void setNodeOperationalState(DatanodeDetails datanodeDetails,
+      HddsProtos.NodeOperationalState newState) throws NodeNotFoundException {
+  }
+
+  /**
+   * Set the operation state of a node.
+   * @param datanodeDetails The datanode to set the new state for
+   * @param newState The new operational state for the node
+   */
+  public void setNodeOperationalState(DatanodeDetails datanodeDetails,
+      HddsProtos.NodeOperationalState newState, long opStateExpiryEpocSec)
+      throws NodeNotFoundException {
+  }
+
+  /**
    * Get set of pipelines a datanode is part of.
    * @param dnId - datanodeID
    * @return Set of PipelineID
@@ -516,12 +564,23 @@
   }
 
   @Override
-  public Map<String, Integer> getNodeCount() {
-    Map<String, Integer> nodeCountMap = new HashMap<String, Integer>();
-    for (HddsProtos.NodeState state : HddsProtos.NodeState.values()) {
-      nodeCountMap.put(state.toString(), getNodeCount(state));
+  public Map<String, Map<String, Integer>> getNodeCount() {
+    Map<String, Map<String, Integer>> nodes = new HashMap<>();
+    for (NodeOperationalState opState : NodeOperationalState.values()) {
+      Map<String, Integer> states = new HashMap<>();
+      for (HddsProtos.NodeState health : HddsProtos.NodeState.values()) {
+        states.put(health.name(), 0);
+      }
+      nodes.put(opState.name(), states);
     }
-    return nodeCountMap;
+    // At the moment MockNodeManager is not aware of decommission and
+    // maintenance states, therefore loop over all nodes and assume all nodes
+    // are IN_SERVICE. This will be fixed as part of HDDS-2673
+    for (HddsProtos.NodeState state : HddsProtos.NodeState.values()) {
+      nodes.get(NodeOperationalState.IN_SERVICE.name())
+          .compute(state.name(), (k, v) -> v + 1);
+    }
+    return nodes;
   }
 
   @Override
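
getNodeCount() now returns a two-level map keyed first by operational state and then by health. A reader sketch for one cell of that map (the helper name is illustrative):

    import java.util.Map;
    import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
    import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState;

    // Number of nodes that are both IN_SERVICE and HEALTHY.
    static int inServiceHealthyCount(Map<String, Map<String, Integer>> counts) {
      return counts
          .get(NodeOperationalState.IN_SERVICE.name())
          .getOrDefault(HddsProtos.NodeState.HEALTHY.name(), 0);
    }
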
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/SimpleMockNodeManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/SimpleMockNodeManager.java
new file mode 100644
index 0000000..30d3a4e
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/SimpleMockNodeManager.java
@@ -0,0 +1,332 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.LayoutVersionProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.NodeReportProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.PipelineReportsProto;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.node.DatanodeInfo;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.protocol.VersionResponse;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
+import org.apache.hadoop.ozone.protocol.commands.RegisteredCommand;
+import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Basic implementation of the NodeManager interface which can be used in tests.
+ *
+ * TODO - Merge the functionality with MockNodeManager, as it needs refactoring
+ *        after the introduction of decommission and maintenance states.
+ */
+public class SimpleMockNodeManager implements NodeManager {
+
+  private Map<UUID, DatanodeInfo> nodeMap = new ConcurrentHashMap<>();
+  private Map<UUID, Set<PipelineID>> pipelineMap = new ConcurrentHashMap<>();
+  private Map<UUID, Set<ContainerID>> containerMap = new ConcurrentHashMap<>();
+
+  public void register(DatanodeDetails dd, NodeStatus status) {
+    dd.setPersistedOpState(status.getOperationalState());
+    dd.setPersistedOpStateExpiryEpochSec(status.getOpStateExpiryEpochSeconds());
+    nodeMap.put(dd.getUuid(), new DatanodeInfo(dd, status, null));
+  }
+
+  public void setNodeStatus(DatanodeDetails dd, NodeStatus status) {
+    dd.setPersistedOpState(status.getOperationalState());
+    dd.setPersistedOpStateExpiryEpochSec(status.getOpStateExpiryEpochSeconds());
+    DatanodeInfo dni = nodeMap.get(dd.getUuid());
+    dni.setNodeStatus(status);
+  }
+
+  /**
+   * Set the number of pipelines for the given node. This simply generates
+   * new PipelineID objects and places them in a set. No actual pipelines are
+   * created.
+   *
+   * Setting the count to zero effectively deletes the pipelines for the node.
+   *
+   * @param dd The DatanodeDetails for which to create the pipelines
+   * @param count The number of pipelines to create or zero to delete all
+   *              pipelines
+   */
+  public void setPipelines(DatanodeDetails dd, int count) {
+    Set<PipelineID> pipelines = new HashSet<>();
+    for (int i=0; i<count; i++) {
+      pipelines.add(PipelineID.randomId());
+    }
+    pipelineMap.put(dd.getUuid(), pipelines);
+  }
+
+  /**
+   * If the given node was registered with the nodeManager, return the
+   * NodeStatus for the node. Otherwise return a NodeStatus of "In Service
+   * and Healthy".
+   * @param datanodeDetails DatanodeDetails
+   * @return The NodeStatus of the node if it is registered, otherwise an
+   *         In Service and Healthy NodeStatus.
+   */
+  @Override
+  public NodeStatus getNodeStatus(DatanodeDetails datanodeDetails)
+      throws NodeNotFoundException {
+    DatanodeInfo dni = nodeMap.get(datanodeDetails.getUuid());
+    if (dni != null) {
+      return dni.getNodeStatus();
+    } else {
+      return NodeStatus.inServiceHealthy();
+    }
+  }
+
+  @Override
+  public void setNodeOperationalState(DatanodeDetails dn,
+      HddsProtos.NodeOperationalState newState) throws NodeNotFoundException {
+    setNodeOperationalState(dn, newState, 0);
+  }
+
+  @Override
+  public void setNodeOperationalState(DatanodeDetails dn,
+      HddsProtos.NodeOperationalState newState, long opStateExpiryEpocSec)
+      throws NodeNotFoundException {
+    DatanodeInfo dni = nodeMap.get(dn.getUuid());
+    if (dni == null) {
+      throw new NodeNotFoundException();
+    }
+    dni.setNodeStatus(
+        new NodeStatus(
+            newState, dni.getNodeStatus().getHealth(), opStateExpiryEpocSec));
+  }
+
+  /**
+   * Return the set of PipelineID associated with the given DatanodeDetails.
+   *
+   * If there are no pipelines, null is returned, to mirror the behaviour of
+   * SCMNodeManager.
+   *
+   * @param datanodeDetails The datanode for which to return the pipelines
+   * @return A set of PipelineID or null if there are none
+   */
+  @Override
+  public Set<PipelineID> getPipelines(DatanodeDetails datanodeDetails) {
+    Set<PipelineID> p = pipelineMap.get(datanodeDetails.getUuid());
+    if (p == null || p.size() == 0) {
+      return null;
+    } else {
+      return p;
+    }
+  }
+
+  @Override
+  public int getPipelinesCount(DatanodeDetails datanodeDetails) {
+    return 0;
+  }
+
+  @Override
+  public void setContainers(DatanodeDetails dn,
+      Set<ContainerID> containerIds) throws NodeNotFoundException {
+    containerMap.put(dn.getUuid(), containerIds);
+  }
+
+  /**
+   * Return the set of ContainerID associated with the datanode. If there are
+   * no containers present, an empty set is returned to mirror the behaviour of
+   * SCMNodeManager.
+   *
+   * @param dn The datanodeDetails for which to return the containers
+   * @return A Set of ContainerID or an empty Set if none are present
+   * @throws NodeNotFoundException
+   */
+  @Override
+  public Set<ContainerID> getContainers(DatanodeDetails dn)
+      throws NodeNotFoundException {
+    // The concrete implementation of this method in SCMNodeManager will return
+    // an empty set if there are no containers, and will never return null.
+    return containerMap
+        .computeIfAbsent(dn.getUuid(), key -> new HashSet<>());
+  }
+
+  /**
+   * Below here are all auto-generated placeholder methods to implement the
+   * interface.
+   */
+
+  @Override
+  public List<DatanodeDetails> getNodes(NodeStatus nodeStatus) {
+    return null;
+  }
+
+  @Override
+  public List<DatanodeDetails> getNodes(
+      HddsProtos.NodeOperationalState opState, HddsProtos.NodeState health) {
+    return null;
+  }
+
+  @Override
+  public int getNodeCount(NodeStatus nodeStatus) {
+    return 0;
+  }
+
+  @Override
+  public int getNodeCount(HddsProtos.NodeOperationalState opState,
+                          HddsProtos.NodeState health) {
+    return 0;
+  }
+
+  @Override
+  public List<DatanodeDetails> getAllNodes() {
+    return null;
+  }
+
+  @Override
+  public SCMNodeStat getStats() {
+    return null;
+  }
+
+  @Override
+  public Map<DatanodeDetails, SCMNodeStat> getNodeStats() {
+    return null;
+  }
+
+  @Override
+  public SCMNodeMetric getNodeStat(DatanodeDetails datanodeDetails) {
+    return null;
+  }
+
+  @Override
+  public void addPipeline(Pipeline pipeline) {
+  }
+
+  @Override
+  public void removePipeline(Pipeline pipeline) {
+  }
+
+  @Override
+  public void addContainer(DatanodeDetails datanodeDetails,
+      ContainerID containerId) throws NodeNotFoundException {
+  }
+
+
+
+  @Override
+  public void addDatanodeCommand(UUID dnId, SCMCommand command) {
+  }
+
+  @Override
+  public void processNodeReport(DatanodeDetails datanodeDetails,
+      NodeReportProto nodeReport) {
+  }
+
+  @Override
+  public void processLayoutVersionReport(DatanodeDetails datanodeDetails,
+                                         LayoutVersionProto layoutReport) {
+  }
+
+  @Override
+  public List<SCMCommand> getCommandQueue(UUID dnID) {
+    return null;
+  }
+
+  @Override
+  public DatanodeDetails getNodeByUuid(String uuid) {
+    return null;
+  }
+
+  @Override
+  public List<DatanodeDetails> getNodesByAddress(String address) {
+    return null;
+  }
+
+  @Override
+  public NetworkTopology getClusterNetworkTopologyMap() {
+    return null;
+  }
+
+  @Override
+  public int minHealthyVolumeNum(List<DatanodeDetails> dnList) {
+    return 0;
+  }
+
+  @Override
+  public int pipelineLimit(DatanodeDetails dn) {
+    return 1;
+  }
+
+  @Override
+  public int minPipelineLimit(List<DatanodeDetails> dn) {
+    return 0;
+  }
+
+  @Override
+  public void close() throws IOException {
+
+  }
+
+  @Override
+  public Map<String, Map<String, Integer>> getNodeCount() {
+    return null;
+  }
+
+  @Override
+  public Map<String, Long> getNodeInfo() {
+    return null;
+  }
+
+  @Override
+  public void onMessage(CommandForDatanode commandForDatanode,
+                        EventPublisher publisher) {
+  }
+
+  @Override
+  public VersionResponse getVersion(
+      StorageContainerDatanodeProtocolProtos.SCMVersionRequestProto
+          versionRequest) {
+    return null;
+  }
+
+  @Override
+  public RegisteredCommand register(DatanodeDetails datanodeDetails,
+                                    NodeReportProto nodeReport,
+                                    PipelineReportsProto pipelineReport,
+                                    LayoutVersionProto layoutreport) {
+    return null;
+  }
+
+  @Override
+  public List<SCMCommand> processHeartbeat(DatanodeDetails datanodeDetails,
+                                           LayoutVersionProto layoutInfo) {
+    return null;
+  }
+
+  @Override
+  public Boolean isNodeRegistered(DatanodeDetails datanodeDetails) {
+    return null;
+  }
+
+}
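
A short usage sketch for the new mock; the calls are limited to the methods defined above, while the scenario itself and the helper name are hypothetical:

    import org.apache.hadoop.hdds.protocol.DatanodeDetails;
    import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
    import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
    import org.apache.hadoop.hdds.scm.node.NodeStatus;
    import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;

    // Register a healthy in-service datanode, give it two fake pipelines,
    // then flip it to DECOMMISSIONING and read the status back.
    static void exerciseMock(SimpleMockNodeManager nodeManager)
        throws NodeNotFoundException {
      DatanodeDetails dn = MockDatanodeDetails.randomDatanodeDetails();
      nodeManager.register(dn, NodeStatus.inServiceHealthy());
      nodeManager.setPipelines(dn, 2);
      nodeManager.setNodeOperationalState(dn,
          HddsProtos.NodeOperationalState.DECOMMISSIONING);
      NodeStatus status = nodeManager.getNodeStatus(dn);
      assert status.getOperationalState()
          == HddsProtos.NodeOperationalState.DECOMMISSIONING;
    }
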
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerReportHandler.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerReportHandler.java
index 205fea8..979e37f 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerReportHandler.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerReportHandler.java
@@ -20,7 +20,6 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
@@ -28,6 +27,7 @@
     .StorageContainerDatanodeProtocolProtos.ContainerReplicaProto;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
 import org.apache.hadoop.hdds.scm.server
     .SCMDatanodeHeartbeatDispatcher.ContainerReportFromDatanode;
@@ -116,7 +116,7 @@
     final ContainerReportHandler reportHandler = new ContainerReportHandler(
         nodeManager, containerManager);
     final Iterator<DatanodeDetails> nodeIterator = nodeManager.getNodes(
-        NodeState.HEALTHY).iterator();
+        NodeStatus.inServiceHealthy()).iterator();
     final DatanodeDetails datanodeOne = nodeIterator.next();
     final DatanodeDetails datanodeTwo = nodeIterator.next();
     final DatanodeDetails datanodeThree = nodeIterator.next();
@@ -185,7 +185,7 @@
         nodeManager, containerManager);
 
     final Iterator<DatanodeDetails> nodeIterator = nodeManager.getNodes(
-        NodeState.HEALTHY).iterator();
+        NodeStatus.inServiceHealthy()).iterator();
     final DatanodeDetails datanodeOne = nodeIterator.next();
     final DatanodeDetails datanodeTwo = nodeIterator.next();
     final DatanodeDetails datanodeThree = nodeIterator.next();
@@ -264,7 +264,7 @@
         nodeManager, containerManager);
 
     final Iterator<DatanodeDetails> nodeIterator = nodeManager.getNodes(
-        NodeState.HEALTHY).iterator();
+        NodeStatus.inServiceHealthy()).iterator();
     final DatanodeDetails datanodeOne = nodeIterator.next();
     final DatanodeDetails datanodeTwo = nodeIterator.next();
     final DatanodeDetails datanodeThree = nodeIterator.next();
@@ -343,7 +343,7 @@
         nodeManager, containerManager);
 
     final Iterator<DatanodeDetails> nodeIterator = nodeManager.getNodes(
-        NodeState.HEALTHY).iterator();
+        NodeStatus.inServiceHealthy()).iterator();
     final DatanodeDetails datanodeOne = nodeIterator.next();
     final DatanodeDetails datanodeTwo = nodeIterator.next();
     final DatanodeDetails datanodeThree = nodeIterator.next();
@@ -420,7 +420,7 @@
     final ContainerReportHandler reportHandler = new ContainerReportHandler(
         nodeManager, containerManager);
     final Iterator<DatanodeDetails> nodeIterator = nodeManager.getNodes(
-        NodeState.HEALTHY).iterator();
+        NodeStatus.inServiceHealthy()).iterator();
 
     final DatanodeDetails datanodeOne = nodeIterator.next();
     final DatanodeDetails datanodeTwo = nodeIterator.next();
@@ -491,7 +491,7 @@
     final ContainerReportHandler reportHandler = new ContainerReportHandler(
         nodeManager, containerManager);
     final Iterator<DatanodeDetails> nodeIterator = nodeManager.getNodes(
-        NodeState.HEALTHY).iterator();
+        NodeStatus.inServiceHealthy()).iterator();
 
     final DatanodeDetails datanodeOne = nodeIterator.next();
     final DatanodeDetails datanodeTwo = nodeIterator.next();
@@ -562,7 +562,7 @@
     final ContainerReportHandler reportHandler = new ContainerReportHandler(
         nodeManager, containerManager);
     final Iterator<DatanodeDetails> nodeIterator = nodeManager.getNodes(
-        NodeState.HEALTHY).iterator();
+        NodeStatus.inServiceHealthy()).iterator();
 
     final DatanodeDetails datanodeOne = nodeIterator.next();
     final DatanodeDetails datanodeTwo = nodeIterator.next();
@@ -635,7 +635,7 @@
         nodeManager, containerManager);
 
     final Iterator<DatanodeDetails> nodeIterator = nodeManager.getNodes(
-        NodeState.HEALTHY).iterator();
+        NodeStatus.inServiceHealthy()).iterator();
     final DatanodeDetails datanodeOne = nodeIterator.next();
     final ContainerInfo containerOne = getContainer(LifeCycleState.DELETED);
 
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestReplicationManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestReplicationManager.java
index 1426ae3..bc34c97 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestReplicationManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestReplicationManager.java
@@ -18,21 +18,24 @@
 
 package org.apache.hadoop.hdds.scm.container;
 
+import com.google.common.primitives.Longs;
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.SCMCommandProto;
-import org.apache.hadoop.hdds.scm.container.ReplicationManager.ReplicationManagerConfiguration;
+import org.apache.hadoop.hdds.scm.container.ReplicationManager
+    .ReplicationManagerConfiguration;
 import org.apache.hadoop.hdds.scm.PlacementPolicy;
 import org.apache.hadoop.hdds.scm.container.placement.algorithms.ContainerPlacementStatusDefault;
 import org.apache.hadoop.hdds.scm.events.SCMEvents;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.scm.node.SCMNodeManager;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
 import org.apache.hadoop.hdds.server.events.EventHandler;
 import org.apache.hadoop.hdds.server.events.EventPublisher;
 import org.apache.hadoop.hdds.server.events.EventQueue;
@@ -61,6 +64,13 @@
 import java.util.stream.IntStream;
 
 import static org.apache.hadoop.hdds.protocol.MockDatanodeDetails.createDatanodeDetails;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.DECOMMISSIONED;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.DECOMMISSIONING;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.IN_MAINTENANCE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.IN_SERVICE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
+import static org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State.CLOSED;
 import static org.apache.hadoop.hdds.scm.TestUtils.getContainer;
 import static org.apache.hadoop.hdds.scm.TestUtils.getReplicas;
 import static org.apache.hadoop.hdds.protocol.MockDatanodeDetails.randomDatanodeDetails;
@@ -75,13 +85,17 @@
   private PlacementPolicy containerPlacementPolicy;
   private EventQueue eventQueue;
   private DatanodeCommandHandler datanodeCommandHandler;
+  private SimpleMockNodeManager nodeManager;
+  private ContainerManager containerManager;
+  private ConfigurationSource conf;
   private SCMNodeManager scmNodeManager;
 
   @Before
-  public void setup() throws IOException, InterruptedException {
-    final ConfigurationSource conf = new OzoneConfiguration();
-    final ContainerManager containerManager =
-        Mockito.mock(ContainerManager.class);
+  public void setup()
+      throws IOException, InterruptedException, NodeNotFoundException {
+    conf = new OzoneConfiguration();
+    containerManager = Mockito.mock(ContainerManager.class);
+    nodeManager = new SimpleMockNodeManager();
     eventQueue = new EventQueue();
     containerStateManager = new ContainerStateManager(conf);
 
@@ -121,9 +135,9 @@
         });
 
     scmNodeManager = Mockito.mock(SCMNodeManager.class);
-    Mockito.when(scmNodeManager.getNodeState(
+    Mockito.when(scmNodeManager.getNodeStatus(
         Mockito.any(DatanodeDetails.class)))
-        .thenReturn(NodeState.HEALTHY);
+        .thenReturn(NodeStatus.inServiceHealthy());
 
     replicationManager = new ReplicationManager(
         new ReplicationManagerConfiguration(),
@@ -131,7 +145,21 @@
         containerPlacementPolicy,
         eventQueue,
         new LockManager<>(conf),
-        scmNodeManager);
+        nodeManager);
+    replicationManager.start();
+    Thread.sleep(100L);
+  }
+
+  private void createReplicationManager(ReplicationManagerConfiguration rmConf)
+      throws InterruptedException {
+    replicationManager = new ReplicationManager(
+        rmConf,
+        containerManager,
+        containerPlacementPolicy,
+        eventQueue,
+        new LockManager<ContainerID>(conf),
+        nodeManager);
+
     replicationManager.start();
     Thread.sleep(100L);
   }
@@ -596,7 +624,7 @@
       throws SCMException, ContainerNotFoundException, InterruptedException {
     final ContainerInfo container = getContainer(LifeCycleState.CLOSED);
     final ContainerID id = container.containerID();
-    final Set<ContainerReplica> replicas = getReplicas(id, State.CLOSED,
+    final Set<ContainerReplica> replicas = getReplicas(id, CLOSED,
         randomDatanodeDetails(),
         randomDatanodeDetails(),
         randomDatanodeDetails());
@@ -845,6 +873,243 @@
         .getInvocationCount(SCMCommandProto.Type.deleteContainerCommand));
   }
 
+  /**
+   * ReplicationManager should replicate an additional replica if there are
+   * decommissioned replicas.
+   */
+  @Test
+  public void testUnderReplicatedDueToDecommission() throws
+      SCMException, ContainerNotFoundException, InterruptedException {
+    final ContainerInfo container = createContainer(LifeCycleState.CLOSED);
+    addReplica(container, new NodeStatus(IN_SERVICE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(DECOMMISSIONING, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(DECOMMISSIONING, HEALTHY), CLOSED);
+    assertReplicaScheduled(2);
+  }
+
+  /**
+   * ReplicationManager should replicate an additional replica when all copies
+   * are decommissioning.
+   */
+  @Test
+  public void testUnderReplicatedDueToAllDecommission() throws
+      SCMException, ContainerNotFoundException, InterruptedException {
+    final ContainerInfo container = createContainer(LifeCycleState.CLOSED);
+    addReplica(container, new NodeStatus(DECOMMISSIONING, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(DECOMMISSIONING, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(DECOMMISSIONING, HEALTHY), CLOSED);
+    assertReplicaScheduled(3);
+  }
+
+  /**
+   * ReplicationManager should not take any action when the container is
+   * correctly replicated with decommissioned replicas still present.
+   */
+  @Test
+  public void testCorrectlyReplicatedWithDecommission() throws
+      SCMException, ContainerNotFoundException, InterruptedException {
+    final ContainerInfo container = createContainer(LifeCycleState.CLOSED);
+    addReplica(container, new NodeStatus(IN_SERVICE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_SERVICE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_SERVICE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(DECOMMISSIONING, HEALTHY), CLOSED);
+    assertReplicaScheduled(0);
+  }
+
+  /**
+   * ReplicationManager should replicate an additional replica when min rep
+   * is not met for maintenance.
+   */
+  @Test
+  public void testUnderReplicatedDueToMaintenance() throws
+      SCMException, ContainerNotFoundException, InterruptedException {
+    final ContainerInfo container = createContainer(LifeCycleState.CLOSED);
+    addReplica(container, new NodeStatus(IN_SERVICE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    assertReplicaScheduled(1);
+  }
+
+  /**
+   * ReplicationManager should not replicate an additional replica if the
+   * min replica for maintenance is 1 and another replica is available.
+   */
+  @Test
+  public void testNotUnderReplicatedDueToMaintenanceMinRepOne() throws
+      SCMException, ContainerNotFoundException, InterruptedException {
+    replicationManager.stop();
+    ReplicationManagerConfiguration newConf =
+        new ReplicationManagerConfiguration();
+    newConf.setMaintenanceReplicaMinimum(1);
+    createReplicationManager(newConf);
+    final ContainerInfo container = createContainer(LifeCycleState.CLOSED);
+    addReplica(container, new NodeStatus(IN_SERVICE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    assertReplicaScheduled(0);
+  }
+
+  /**
+   * ReplicationManager should replicate an additional replica when all copies
+   * are going offline and min rep is 1.
+   */
+  @Test
+  public void testUnderReplicatedDueToMaintenanceMinRepOne() throws
+      SCMException, ContainerNotFoundException, InterruptedException {
+    replicationManager.stop();
+    ReplicationManagerConfiguration newConf =
+        new ReplicationManagerConfiguration();
+    newConf.setMaintenanceReplicaMinimum(1);
+    createReplicationManager(newConf);
+    final ContainerInfo container = createContainer(LifeCycleState.CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    assertReplicaScheduled(1);
+  }
+
+  /**
+   * ReplicationManager should replicate additional replicas when all copies
+   * are going into maintenance.
+   */
+  @Test
+  public void testUnderReplicatedDueToAllMaintenance() throws
+      SCMException, ContainerNotFoundException, InterruptedException {
+    final ContainerInfo container = createContainer(LifeCycleState.CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    assertReplicaScheduled(2);
+  }
+
+  /**
+   * ReplicationManager should not replicate an additional replica when
+   * sufficient replicas are available.
+   */
+  @Test
+  public void testCorrectlyReplicatedWithMaintenance() throws
+      SCMException, ContainerNotFoundException, InterruptedException {
+    final ContainerInfo container = createContainer(LifeCycleState.CLOSED);
+    addReplica(container, new NodeStatus(IN_SERVICE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_SERVICE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    assertReplicaScheduled(0);
+  }
+
+  /**
+   * ReplicationManager should replicate additional replicas when all copies
+   * are decommissioning or in maintenance.
+   */
+  @Test
+  public void testUnderReplicatedWithDecommissionAndMaintenance() throws
+      SCMException, ContainerNotFoundException, InterruptedException {
+    final ContainerInfo container = createContainer(LifeCycleState.CLOSED);
+    addReplica(container, new NodeStatus(DECOMMISSIONED, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(DECOMMISSIONED, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    assertReplicaScheduled(2);
+  }
+
+  /**
+   * When a CLOSED container is over replicated, ReplicationManager
+   * deletes the excess replicas. While choosing the replica for deletion
+   * ReplicationManager should not attempt to remove a DECOMMISSIONED or
+   * IN_MAINTENANCE replica.
+   */
+  @Test
+  public void testOverReplicatedClosedContainerWithDecomAndMaint()
+      throws SCMException, ContainerNotFoundException, InterruptedException {
+    final ContainerInfo container = createContainer(LifeCycleState.CLOSED);
+    addReplica(container, NodeStatus.inServiceHealthy(), CLOSED);
+    addReplica(container, new NodeStatus(DECOMMISSIONED, HEALTHY), CLOSED);
+    addReplica(container, new NodeStatus(IN_MAINTENANCE, HEALTHY), CLOSED);
+    addReplica(container, NodeStatus.inServiceHealthy(), CLOSED);
+    addReplica(container, NodeStatus.inServiceHealthy(), CLOSED);
+    addReplica(container, NodeStatus.inServiceHealthy(), CLOSED);
+    addReplica(container, NodeStatus.inServiceHealthy(), CLOSED);
+
+    final int currentDeleteCommandCount = datanodeCommandHandler
+        .getInvocationCount(SCMCommandProto.Type.deleteContainerCommand);
+
+    replicationManager.processContainersNow();
+    // Wait for EventQueue to call the event handler
+    Thread.sleep(100L);
+    Assert.assertEquals(currentDeleteCommandCount + 2, datanodeCommandHandler
+        .getInvocationCount(SCMCommandProto.Type.deleteContainerCommand));
+    // Get the DECOM and Maint replicas and ensure none of them are scheduled
+    // for removal
+    Set<ContainerReplica> decom =
+        containerStateManager.getContainerReplicas(container.containerID())
+        .stream()
+        .filter(r -> r.getDatanodeDetails().getPersistedOpState() != IN_SERVICE)
+        .collect(Collectors.toSet());
+    for (ContainerReplica r : decom) {
+      Assert.assertFalse(datanodeCommandHandler.received(
+          SCMCommandProto.Type.deleteContainerCommand,
+          r.getDatanodeDetails()));
+    }
+  }
+
+  /**
+   * Replication Manager should not attempt to replicate from an unhealthy
+   * (stale or dead) node. To test this, set up a scenario where a replica needs
+   * to be created, but mark all nodes stale. That way, no new replica will be
+   * scheduled.
+   */
+  @Test
+  public void testUnderReplicatedNotHealthySource()
+      throws SCMException, ContainerNotFoundException, InterruptedException {
+    final ContainerInfo container = createContainer(LifeCycleState.CLOSED);
+    addReplica(container, NodeStatus.inServiceStale(), CLOSED);
+    addReplica(container, new NodeStatus(DECOMMISSIONED, STALE), CLOSED);
+    addReplica(container, new NodeStatus(DECOMMISSIONED, STALE), CLOSED);
+    // A replica should be scheduled, but as all nodes are stale, nothing
+    // gets scheduled.
+    assertReplicaScheduled(0);
+  }
+
+  private ContainerInfo createContainer(LifeCycleState containerState)
+      throws SCMException {
+    final ContainerInfo container = getContainer(containerState);
+    final ContainerID id = container.containerID();
+    containerStateManager.loadContainer(container);
+    return container;
+  }
+
+  private ContainerReplica addReplica(ContainerInfo container,
+      NodeStatus nodeStatus, State replicaState)
+      throws ContainerNotFoundException {
+    DatanodeDetails dn = randomDatanodeDetails();
+    dn.setPersistedOpState(nodeStatus.getOperationalState());
+    dn.setPersistedOpStateExpiryEpochSec(
+        nodeStatus.getOpStateExpiryEpochSeconds());
+    nodeManager.register(dn, nodeStatus);
+    // Using the same originID for all replicas in the container set. If each
+    // replica has a unique originID, it causes problems in ReplicationManager
+    // when processing over-replicated containers.
+    final UUID originNodeId =
+        UUID.nameUUIDFromBytes(Longs.toByteArray(container.getContainerID()));
+    final ContainerReplica replica = getReplicas(
+        container.containerID(), CLOSED, 1000L, originNodeId, dn);
+    containerStateManager
+        .updateContainerReplica(container.containerID(), replica);
+    return replica;
+  }
+
+  private void assertReplicaScheduled(int delta) throws InterruptedException {
+    final int currentReplicateCommandCount = datanodeCommandHandler
+        .getInvocationCount(SCMCommandProto.Type.replicateContainerCommand);
+
+    replicationManager.processContainersNow();
+    // Wait for EventQueue to call the event handler
+    Thread.sleep(100L);
+    Assert.assertEquals(currentReplicateCommandCount + delta,
+        datanodeCommandHandler.getInvocationCount(
+            SCMCommandProto.Type.replicateContainerCommand));
+  }
+
   @After
   public void teardown() throws IOException {
     containerStateManager.close();
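
The createContainer/addReplica/assertReplicaScheduled helpers above make further scenarios cheap to express. A hypothetical extra case in the same style; the expected command count is inferred from testUnderReplicatedDueToDecommission and is an assumption of this sketch, not an assertion from the patch:

      /**
       * Hypothetical: one in-service replica plus one decommissioning replica
       * should leave two copies to re-create on in-service nodes.
       */
      @Test
      public void testUnderReplicatedWithSingleInServiceReplica() throws
          SCMException, ContainerNotFoundException, InterruptedException {
        final ContainerInfo container = createContainer(LifeCycleState.CLOSED);
        addReplica(container, new NodeStatus(IN_SERVICE, HEALTHY), CLOSED);
        addReplica(container, new NodeStatus(DECOMMISSIONING, HEALTHY), CLOSED);
        assertReplicaScheduled(2);
      }
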
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestUnknownContainerReport.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestUnknownContainerReport.java
index 1c2cdd0..f2e4968 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestUnknownContainerReport.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestUnknownContainerReport.java
@@ -27,7 +27,6 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.ContainerReplicaProto;
 import org.apache.hadoop.hdds.protocol.proto
@@ -35,6 +34,7 @@
 import org.apache.hadoop.hdds.scm.ScmConfig;
 import org.apache.hadoop.hdds.scm.events.SCMEvents;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.scm.server
     .SCMDatanodeHeartbeatDispatcher.ContainerReportFromDatanode;
 import org.apache.hadoop.hdds.server.events.EventPublisher;
@@ -107,7 +107,7 @@
 
     ContainerInfo container = getContainer(LifeCycleState.CLOSED);
     Iterator<DatanodeDetails> nodeIterator = nodeManager
-        .getNodes(NodeState.HEALTHY).iterator();
+        .getNodes(NodeStatus.inServiceHealthy()).iterator();
     DatanodeDetails datanode = nodeIterator.next();
 
     ContainerReportsProto containerReport = getContainerReportsProto(
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestContainerPlacementFactory.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestContainerPlacementFactory.java
index 842c494..aa506cb 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestContainerPlacementFactory.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestContainerPlacementFactory.java
@@ -23,7 +23,6 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.scm.ContainerPlacementStatus;
 import org.apache.hadoop.hdds.scm.PlacementPolicy;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
@@ -34,15 +33,17 @@
 import org.apache.hadoop.hdds.scm.net.NodeSchema;
 import org.apache.hadoop.hdds.scm.net.NodeSchemaManager;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
 
 import static org.apache.hadoop.hdds.scm.net.NetConstants.LEAF_SCHEMA;
 import static org.apache.hadoop.hdds.scm.net.NetConstants.RACK_SCHEMA;
 import static org.apache.hadoop.hdds.scm.net.NetConstants.ROOT_SCHEMA;
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Test;
+
 import static org.mockito.Matchers.anyObject;
-import org.mockito.Mockito;
 import static org.mockito.Mockito.when;
 
 /**
@@ -89,7 +90,7 @@
 
     // create mock node manager
     nodeManager = Mockito.mock(NodeManager.class);
-    when(nodeManager.getNodes(NodeState.HEALTHY))
+    when(nodeManager.getNodes(NodeStatus.inServiceHealthy()))
         .thenReturn(new ArrayList<>(datanodes));
     when(nodeManager.getNodeStat(anyObject()))
         .thenReturn(new SCMNodeMetric(storageCapacity, 0L, 100L));
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementCapacity.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementCapacity.java
index afefc9a..ee9c029 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementCapacity.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementCapacity.java
@@ -25,11 +25,11 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
 
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.junit.Assert;
 import org.junit.Test;
 import static org.mockito.Matchers.anyObject;
@@ -51,7 +51,7 @@
     }
 
     NodeManager mockNodeManager = Mockito.mock(NodeManager.class);
-    when(mockNodeManager.getNodes(NodeState.HEALTHY))
+    when(mockNodeManager.getNodes(NodeStatus.inServiceHealthy()))
         .thenReturn(new ArrayList<>(datanodes));
 
     when(mockNodeManager.getNodeStat(anyObject()))
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRackAware.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRackAware.java
index 5019ed4..1c332b7 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRackAware.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRackAware.java
@@ -25,7 +25,6 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.scm.ContainerPlacementStatus;
 import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
@@ -35,6 +34,13 @@
 import org.apache.hadoop.hdds.scm.net.NodeSchema;
 import org.apache.hadoop.hdds.scm.net.NodeSchemaManager;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.mockito.Mockito;
 
 import org.apache.commons.lang3.StringUtils;
 import static org.apache.hadoop.hdds.scm.net.NetConstants.LEAF_SCHEMA;
@@ -42,18 +48,12 @@
 import static org.apache.hadoop.hdds.scm.net.NetConstants.ROOT_SCHEMA;
 import org.hamcrest.MatcherAssert;
 import static org.hamcrest.Matchers.greaterThanOrEqualTo;
-import org.junit.Assert;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 import static org.junit.Assume.assumeTrue;
-import org.junit.Before;
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
 import static org.mockito.Matchers.anyObject;
-import org.mockito.Mockito;
 import static org.mockito.Mockito.when;
 
 /**
@@ -107,10 +107,10 @@
 
     // create mock node manager
     nodeManager = Mockito.mock(NodeManager.class);
+    when(nodeManager.getNodes(NodeStatus.inServiceHealthy()))
+        .thenReturn(new ArrayList<>(datanodes));
     when(nodeManager.getClusterNetworkTopologyMap())
         .thenReturn(cluster);
-    when(nodeManager.getNodes(NodeState.HEALTHY))
-        .thenReturn(new ArrayList<>(datanodes));
     when(nodeManager.getNodeStat(anyObject()))
         .thenReturn(new SCMNodeMetric(STORAGE_CAPACITY, 0L, 100L));
     if (datanodeCount > 4) {
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRandom.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRandom.java
index fb8d2e0..416c3f2 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRandom.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRandom.java
@@ -23,12 +23,12 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.scm.ContainerPlacementStatus;
 import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
 
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.junit.Assert;
 import org.junit.Test;
 import static junit.framework.TestCase.assertEquals;
@@ -54,7 +54,7 @@
     }
 
     NodeManager mockNodeManager = Mockito.mock(NodeManager.class);
-    when(mockNodeManager.getNodes(NodeState.HEALTHY))
+    when(mockNodeManager.getNodes(NodeStatus.inServiceHealthy()))
         .thenReturn(new ArrayList<>(datanodes));
 
     when(mockNodeManager.getNodeStat(anyObject()))
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/states/TestContainerReplicaCount.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/states/TestContainerReplicaCount.java
new file mode 100644
index 0000000..3c7c952
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/states/TestContainerReplicaCount.java
@@ -0,0 +1,465 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.container.states;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.ContainerReplica;
+import org.apache.hadoop.hdds.scm.container.ContainerReplicaCount;
+import org.junit.Before;
+import org.junit.Test;
+import java.util.*;
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos
+    .NodeOperationalState.DECOMMISSIONED;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos
+    .NodeOperationalState.DECOMMISSIONING;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos
+    .NodeOperationalState.ENTERING_MAINTENANCE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos
+    .NodeOperationalState.IN_MAINTENANCE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos
+    .NodeOperationalState.IN_SERVICE;
+import static org.apache.hadoop.hdds.protocol.proto
+    .StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State.CLOSED;
+import static org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State.OPEN;
+import static org.junit.Assert.assertFalse;
+
+/**
+ * Class used to test the ContainerReplicaCount class.
+ */
+public class TestContainerReplicaCount {
+
+  @Before
+  public void setup() {
+  }
+
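+  // Note on the assertions below: the ContainerReplicaCount constructor is
+  // invoked as (container, replicas, inflightAdd, inflightDelete,
+  // replicationFactor, minHealthyForMaintenance). This ordering is inferred
+  // from how the tests use it and is noted here only as a reading aid.
+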
+  @Test
+  public void testThreeHealthyReplica() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    validate(rcnt, true, 0, false);
+  }
+
+  @Test
+  public void testTwoHealthyReplica() {
+    Set<ContainerReplica> replica = registerNodes(IN_SERVICE, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    validate(rcnt, false, 1, false);
+  }
+
+  @Test
+  public void testOneHealthyReplica() {
+    Set<ContainerReplica> replica = registerNodes(IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    validate(rcnt, false, 2, false);
+  }
+
+  @Test
+  public void testTwoHealthyAndInflightAdd() {
+
+    Set<ContainerReplica> replica = registerNodes(IN_SERVICE, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 1, 0, 3, 2);
+    validate(rcnt, false, 0, false);
+  }
+
+  @Test
+  /**
+   * This does not schedule a container to be removed, as the inFlight add may
+   * fail, and then the delete would make the container under-replicated. Once
+   * the add completes there will be 4 healthy replicas and the excess will be
+   * taken care of then.
+   */
+  public void testThreeHealthyAndInflightAdd() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 1, 0, 3, 2);
+    validate(rcnt, true, 0, false);
+  }
+
+  @Test
+  /**
+   * Although the inflight delete may fail, if it succeeds it will leave the
+   * container under-replicated, so we go ahead and schedule another replica
+   * to be added.
+   */
+  public void testThreeHealthyAndInflightDelete() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 1, 3, 2);
+    validate(rcnt, false, 1, false);
+  }
+
+  @Test
+  /**
+   * This is NOT sufficiently replicated as the inflight add may fail and the
+   * inflight delete could succeed, leaving only 2 healthy replicas.
+   */
+  public void testThreeHealthyAndInflightAddAndInFlightDelete() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 1, 1, 3, 2);
+    validate(rcnt, false, 0, false);
+  }
+
+  @Test
+  public void testFourHealthyReplicas() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, IN_SERVICE, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    validate(rcnt, true, -1, true);
+  }
+
+  @Test
+  public void testFourHealthyReplicasAndInFlightDelete() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, IN_SERVICE, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 1, 3, 2);
+    validate(rcnt, true, 0, false);
+  }
+
+  @Test
+  public void testFourHealthyReplicasAndTwoInFlightDelete() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, IN_SERVICE, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 2, 3, 2);
+    validate(rcnt, false, 1, false);
+  }
+
+  @Test
+  public void testOneHealthyReplicaRepFactorOne() {
+    Set<ContainerReplica> replica = registerNodes(IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 1, 2);
+    validate(rcnt, true, 0, false);
+  }
+
+  @Test
+  public void testOneHealthyReplicaRepFactorOneInFlightDelete() {
+    Set<ContainerReplica> replica = registerNodes(IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 1, 1, 2);
+    validate(rcnt, false, 1, false);
+  }
+
+  @Test
+  public void testTwoHealthyReplicaTwoInflightAdd() {
+    Set<ContainerReplica> replica = registerNodes(IN_SERVICE, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 2, 0, 3, 2);
+    validate(rcnt, false, 0, false);
+  }
+
+  /**
+   * From here consider decommission replicas.
+   */
+
+  @Test
+  public void testThreeHealthyAndTwoDecommission() {
+    Set<ContainerReplica> replica = registerNodes(IN_SERVICE, IN_SERVICE,
+        IN_SERVICE, DECOMMISSIONING, DECOMMISSIONING);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    validate(rcnt, true, 0, false);
+  }
+
+  @Test
+  public void testOneDecommissionedReplica() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, DECOMMISSIONING);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    validate(rcnt, false, 1, false);
+  }
+
+  @Test
+  public void testTwoHealthyOneDecommissionedOneInFlightAdd() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, DECOMMISSIONED);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 1, 0, 3, 2);
+    validate(rcnt, false, 0, false);
+  }
+
+  @Test
+  public void testAllDecommissioned() {
+    Set<ContainerReplica> replica =
+        registerNodes(DECOMMISSIONED, DECOMMISSIONED, DECOMMISSIONED);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    validate(rcnt, false, 3, false);
+  }
+
+  @Test
+  public void testAllDecommissionedRepFactorOne() {
+    Set<ContainerReplica> replica = registerNodes(DECOMMISSIONED);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 1, 2);
+    validate(rcnt, false, 1, false);
+
+  }
+
+  @Test
+  public void testAllDecommissionedRepFactorOneInFlightAdd() {
+    Set<ContainerReplica> replica = registerNodes(DECOMMISSIONED);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 1, 0, 1, 2);
+    validate(rcnt, false, 0, false);
+  }
+
+  @Test
+  public void testOneHealthyOneDecommissioningRepFactorOne() {
+    Set<ContainerReplica> replica = registerNodes(DECOMMISSIONED, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 1, 2);
+    validate(rcnt, true, 0, false);
+  }
+
+  /**
+   * Maintenance tests from here.
+   */
+
+  @Test
+  public void testOneHealthyTwoMaintenanceMinRepOfTwo() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_MAINTENANCE, IN_MAINTENANCE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    validate(rcnt, false, 1, false);
+  }
+
+  @Test
+  public void testOneHealthyThreeMaintenanceMinRepOfTwo() {
+    Set<ContainerReplica> replica = registerNodes(IN_SERVICE,
+        IN_MAINTENANCE, IN_MAINTENANCE, ENTERING_MAINTENANCE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    validate(rcnt, false, 1, false);
+  }
+
+  @Test
+  public void testOneHealthyTwoMaintenanceMinRepOfOne() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_MAINTENANCE, ENTERING_MAINTENANCE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 1);
+    validate(rcnt, true, 0, false);
+  }
+
+  @Test
+  public void testOneHealthyThreeMaintenanceMinRepOfTwoInFlightAdd() {
+    Set<ContainerReplica> replica = registerNodes(IN_SERVICE,
+        IN_MAINTENANCE, ENTERING_MAINTENANCE, IN_MAINTENANCE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 1, 0, 3, 2);
+    validate(rcnt, false, 0, false);
+  }
+
+  @Test
+  public void testAllMaintenance() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_MAINTENANCE, ENTERING_MAINTENANCE, IN_MAINTENANCE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    validate(rcnt, false, 2, false);
+  }
+
+  @Test
+  /**
+   * As we have exactly 3 healthy replicas plus an excess of maintenance
+   * copies, we ignore the over-replication caused by the maintenance copies
+   * until they come back online, and deal with them then.
+   */
+  public void testThreeHealthyTwoInMaintenance() {
+    Set<ContainerReplica> replica = registerNodes(IN_SERVICE, IN_SERVICE,
+        IN_SERVICE, IN_MAINTENANCE, ENTERING_MAINTENANCE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    validate(rcnt, true, 0, false);
+  }
+
+  @Test
+  /**
+   * This is somewhat similar to testThreeHealthyTwoInMaintenance() except now
+   * one of the maintenance copies has become healthy, so we will need to
+   * remove one of the now over-replicated healthy replicas.
+   */
+  public void testFourHealthyOneInMaintenance() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, IN_SERVICE, IN_SERVICE,
+            IN_MAINTENANCE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    validate(rcnt, true, -1, true);
+  }
+
+  @Test
+  public void testOneMaintenanceMinRepOfTwoRepFactorOne() {
+    Set<ContainerReplica> replica = registerNodes(IN_MAINTENANCE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 1, 2);
+    validate(rcnt, false, 1, false);
+  }
+
+  @Test
+  public void testOneMaintenanceMinRepOfTwoRepFactorOneInFlightAdd() {
+    Set<ContainerReplica> replica = registerNodes(IN_MAINTENANCE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 1, 0, 1, 2);
+    validate(rcnt, false, 0, false);
+  }
+
+  @Test
+  public void testOneHealthyOneMaintenanceRepFactorOne() {
+    Set<ContainerReplica> replica = registerNodes(IN_MAINTENANCE, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 1, 2);
+    validate(rcnt, true, 0, false);
+  }
+
+  @Test
+  public void testTwoDecomTwoMaintenanceOneInflightAdd() {
+    Set<ContainerReplica> replica =
+        registerNodes(DECOMMISSIONED, DECOMMISSIONING,
+            IN_MAINTENANCE, ENTERING_MAINTENANCE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 1, 0, 3, 2);
+    validate(rcnt, false, 1, false);
+  }
+
+  @Test
+  public void testHealthyContainerIsHealthy() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, IN_SERVICE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    assertTrue(rcnt.isHealthy());
+  }
+
+  @Test
+  public void testIsHealthyWithDifferentReplicaStateNotHealthy() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, IN_SERVICE);
+    for (ContainerReplica r : replica) {
+      DatanodeDetails dn = r.getDatanodeDetails();
+
+      ContainerReplica replace = new ContainerReplica.ContainerReplicaBuilder()
+          .setContainerID(new ContainerID(1))
+          .setContainerState(OPEN)
+          .setDatanodeDetails(dn)
+          .setOriginNodeId(dn.getUuid())
+          .setSequenceId(1)
+          .build();
+      replica.remove(r);
+      replica.add(replace);
+      break;
+    }
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    assertFalse(rcnt.isHealthy());
+  }
+
+  @Test
+  public void testIsHealthyWithMaintReplicaIsHealthy() {
+    Set<ContainerReplica> replica =
+        registerNodes(IN_SERVICE, IN_SERVICE, IN_MAINTENANCE,
+            ENTERING_MAINTENANCE);
+    ContainerInfo container = createContainer(HddsProtos.LifeCycleState.CLOSED);
+    ContainerReplicaCount rcnt =
+        new ContainerReplicaCount(container, replica, 0, 0, 3, 2);
+    assertTrue(rcnt.isHealthy());
+  }
+
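+  // Small helper that asserts the three derived properties of a
+  // ContainerReplicaCount in one call: whether the container is sufficiently
+  // replicated, how many additional replicas are needed (negative means
+  // excess), and whether it is over-replicated.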
+  private void validate(ContainerReplicaCount rcnt,
+      boolean sufficientlyReplicated, int replicaDelta,
+      boolean overReplicated) {
+    assertEquals(sufficientlyReplicated, rcnt.isSufficientlyReplicated());
+    assertEquals(overReplicated, rcnt.isOverReplicated());
+    assertEquals(replicaDelta, rcnt.additionalReplicaNeeded());
+  }
+
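+  // Builds one CLOSED replica on a freshly mocked datanode for each requested
+  // operational state, so a test can describe its replica set purely in terms
+  // of node states.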
+  private Set<ContainerReplica> registerNodes(
+      HddsProtos.NodeOperationalState... states) {
+    Set<ContainerReplica> replica = new HashSet<>();
+    for (HddsProtos.NodeOperationalState s : states) {
+      DatanodeDetails dn = MockDatanodeDetails.randomDatanodeDetails();
+      dn.setPersistedOpState(s);
+      replica.add(new ContainerReplica.ContainerReplicaBuilder()
+          .setContainerID(new ContainerID(1))
+          .setContainerState(CLOSED)
+          .setDatanodeDetails(dn)
+          .setOriginNodeId(dn.getUuid())
+          .setSequenceId(1)
+          .build());
+    }
+    return replica;
+  }
+
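+  // Creates a minimal ContainerInfo with ID 1 in the given lifecycle state.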
+  private ContainerInfo createContainer(HddsProtos.LifeCycleState state) {
+    return new ContainerInfo.Builder()
+        .setContainerID(new ContainerID(1).getId())
+        .setState(state)
+        .build();
+  }
+}
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
index 453609a..2fec55c 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
@@ -177,7 +177,7 @@
 
       //TODO: wait for heartbeat to be processed
       Thread.sleep(4 * 1000);
-      assertEquals(nodeCount, nodeManager.getNodeCount(HEALTHY));
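+      // The null first argument is assumed to act as a wildcard for the node
+      // operational state, so all HEALTHY nodes are counted regardless of it.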
+      assertEquals(nodeCount, nodeManager.getNodeCount(null, HEALTHY));
       assertEquals(capacity * nodeCount,
           (long) nodeManager.getStats().getCapacity().get());
       assertEquals(used * nodeCount,
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDatanodeAdminMonitor.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDatanodeAdminMonitor.java
new file mode 100644
index 0000000..33fe35f
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDatanodeAdminMonitor.java
@@ -0,0 +1,530 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.protocol.proto
+    .StorageContainerDatanodeProtocolProtos.ContainerReplicaProto;
+import org.apache.hadoop.hdds.scm.container.*;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.hdds.server.events.EventQueue;
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.DECOMMISSIONED;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.IN_MAINTENANCE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.IN_SERVICE;
+import static org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State.CLOSED;
+import static org.mockito.Mockito.reset;
+
+/**
+ * Tests to ensure the DatanodeAdminMonitor is working correctly. This class
+ * uses mocks or basic implementations of the key classes outside of the
+ * DatanodeAdminMonitor to allow it to be tested in isolation.
+ */
+public class TestDatanodeAdminMonitor {
+
+  private SimpleMockNodeManager nodeManager;
+  private OzoneConfiguration conf;
+  private DatanodeAdminMonitorImpl monitor;
+  private DatanodeAdminHandler startAdminHandler;
+  private ReplicationManager repManager;
+  private EventQueue eventQueue;
+
+  @Before
+  public void setup() throws IOException, AuthenticationException {
+    conf = new OzoneConfiguration();
+
+    eventQueue = new EventQueue();
+    startAdminHandler = new DatanodeAdminHandler();
+    eventQueue.addHandler(SCMEvents.START_ADMIN_ON_NODE, startAdminHandler);
+
+    nodeManager = new SimpleMockNodeManager();
+
+    repManager = Mockito.mock(ReplicationManager.class);
+
+    monitor =
+        new DatanodeAdminMonitorImpl(conf, eventQueue, nodeManager, repManager);
+  }
+
+  @After
+  public void teardown() {
+  }
+
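+  // The tests below exercise what appears to be the monitor's workflow for a
+  // node leaving service: close any pipelines on the node, wait for its
+  // containers to be sufficiently replicated, then either complete
+  // decommission or wait for the maintenance window to end. This summary is
+  // inferred from the assertions in the tests, not from the monitor itself.
+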
+  @Test
+  public void testNodeCanBeQueuedAndCancelled() {
+    DatanodeDetails dn = MockDatanodeDetails.randomDatanodeDetails();
+    monitor.startMonitoring(dn);
+    assertEquals(1, monitor.getPendingCount());
+
+    monitor.stopMonitoring(dn);
+    assertEquals(0, monitor.getPendingCount());
+    assertEquals(1, monitor.getCancelledCount());
+
+    monitor.startMonitoring(dn);
+    assertEquals(1, monitor.getPendingCount());
+    assertEquals(0, monitor.getCancelledCount());
+  }
+
+  /**
+   * In this test we ensure there are some pipelines for the node being
+   * decommissioned, but there are no containers. Therefore the workflow
+   * must wait until the pipelines have closed before completing the flow.
+   */
+  @Test
+  public void testClosePipelinesEventFiredWhenAdminStarted()
+      throws NodeNotFoundException {
+    DatanodeDetails dn1 = MockDatanodeDetails.randomDatanodeDetails();
+    nodeManager.register(dn1,
+        new NodeStatus(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+            HddsProtos.NodeState.HEALTHY));
+    // Ensure the node has some pipelines
+    nodeManager.setPipelines(dn1, 2);
+    // Add the node to the monitor
+    monitor.startMonitoring(dn1);
+    monitor.run();
+    // Ensure a StartAdmin event was fired
+    eventQueue.processAll(20000);
+    assertEquals(1, startAdminHandler.getInvocation());
+    // Ensure a node is now tracked for decommission
+    assertEquals(1, monitor.getTrackedNodeCount());
+    // Ensure the node remains decommissioning
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+    // Run the monitor again, and it should remain decommissioning
+    monitor.run();
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+
+    // Clear the pipelines and the node should transition to DECOMMISSIONED
+    nodeManager.setPipelines(dn1, 0);
+    monitor.run();
+    assertEquals(0, monitor.getTrackedNodeCount());
+    assertEquals(DECOMMISSIONED,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+  }
+
+  /**
+   * In this test, there are no open pipelines and no containers on the node.
+   * Therefore, we expect the decommission flow to finish on the first run
+   * of the monitor, leaving zero nodes tracked and the node in DECOMMISSIONED
+   * state.
+   */
+  @Test
+  public void testDecommissionNodeTransitionsToCompleteWhenNoContainers()
+      throws NodeNotFoundException {
+    DatanodeDetails dn1 = MockDatanodeDetails.randomDatanodeDetails();
+    nodeManager.register(dn1,
+        new NodeStatus(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+            HddsProtos.NodeState.HEALTHY));
+
+    // Add the node to the monitor. By default we have zero pipelines and
+    // zero containers in the test setup, so the node should immediately
+    // transition to COMPLETED state
+    monitor.startMonitoring(dn1);
+    monitor.run();
+    assertEquals(0, monitor.getTrackedNodeCount());
+    NodeStatus newStatus = nodeManager.getNodeStatus(dn1);
+    assertEquals(DECOMMISSIONED,
+        newStatus.getOperationalState());
+  }
+
+  @Test
+  public void testDecommissionNodeWaitsForContainersToReplicate()
+      throws NodeNotFoundException, ContainerNotFoundException {
+    DatanodeDetails dn1 = MockDatanodeDetails.randomDatanodeDetails();
+    nodeManager.register(dn1,
+        new NodeStatus(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+            HddsProtos.NodeState.HEALTHY));
+
+    nodeManager.setContainers(dn1, generateContainers(3));
+    // Mock Replication Manager to return ContainerReplicaCount's which
+    // always have a DECOMMISSIONED replica.
+    mockGetContainerReplicaCount(
+        HddsProtos.LifeCycleState.CLOSED,
+        DECOMMISSIONED,
+        IN_SERVICE,
+        IN_SERVICE);
+
+    // Run the monitor for the first time and the node will transition to
+    // REPLICATE_CONTAINERS as there are no pipelines to close.
+    monitor.startMonitoring(dn1);
+    monitor.run();
+    DatanodeDetails node = getFirstTrackedNode();
+    assertEquals(1, monitor.getTrackedNodeCount());
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+
+    // Running the monitor again causes it to remain DECOMMISSIONING
+    // as nothing has changed.
+    monitor.run();
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+
+    // Now change the replicationManager mock to return 3 CLOSED replicas
+    // and the node should complete the REPLICATE_CONTAINERS step, moving to
+    // COMPLETED, which ends the decommission workflow.
+    mockGetContainerReplicaCount(
+        HddsProtos.LifeCycleState.CLOSED,
+        IN_SERVICE,
+        IN_SERVICE,
+        IN_SERVICE);
+
+    monitor.run();
+
+    assertEquals(0, monitor.getTrackedNodeCount());
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONED,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+  }
+
+  @Test
+  public void testDecommissionAbortedWhenNodeInUnexpectedState()
+      throws NodeNotFoundException, ContainerNotFoundException {
+    DatanodeDetails dn1 = MockDatanodeDetails.randomDatanodeDetails();
+    nodeManager.register(dn1,
+        new NodeStatus(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+            HddsProtos.NodeState.HEALTHY));
+
+    nodeManager.setContainers(dn1, generateContainers(3));
+    mockGetContainerReplicaCount(
+        HddsProtos.LifeCycleState.CLOSED,
+        DECOMMISSIONED,
+        IN_SERVICE,
+        IN_SERVICE);
+
+    // Add the node to the monitor; it should have 3 under-replicated
+    // containers after the first run
+    monitor.startMonitoring(dn1);
+    monitor.run();
+    assertEquals(1, monitor.getTrackedNodeCount());
+    DatanodeDetails node = getFirstTrackedNode();
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+
+    // Move the node back to IN_SERVICE outside of the monitor. This is an
+    // unexpected state for a decommissioning node, so the workflow should be
+    // aborted on the next run, leaving the node IN_SERVICE.
+    nodeManager.setNodeStatus(dn1,
+        new NodeStatus(IN_SERVICE,
+            HddsProtos.NodeState.HEALTHY));
+    monitor.run();
+    assertEquals(0, monitor.getTrackedNodeCount());
+    assertEquals(IN_SERVICE,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+  }
+
+  @Test
+  public void testDecommissionAbortedWhenNodeGoesDead()
+      throws NodeNotFoundException, ContainerNotFoundException {
+    DatanodeDetails dn1 = MockDatanodeDetails.randomDatanodeDetails();
+    nodeManager.register(dn1,
+        new NodeStatus(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+            HddsProtos.NodeState.HEALTHY));
+
+    nodeManager.setContainers(dn1, generateContainers(3));
+    mockGetContainerReplicaCount(
+        HddsProtos.LifeCycleState.CLOSED,
+        DECOMMISSIONED, IN_SERVICE, IN_SERVICE);
+
+    // Add the node to the monitor; it should have 3 under-replicated
+    // containers after the first run
+    monitor.startMonitoring(dn1);
+    monitor.run();
+    assertEquals(1, monitor.getTrackedNodeCount());
+    DatanodeDetails node = getFirstTrackedNode();
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+
+    // Set the node to dead, and then the workflow should get aborted, setting
+    // the node state back to IN_SERVICE.
+    nodeManager.setNodeStatus(dn1,
+        new NodeStatus(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+            HddsProtos.NodeState.DEAD));
+    monitor.run();
+    assertEquals(0, monitor.getTrackedNodeCount());
+    assertEquals(IN_SERVICE,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+  }
+
+  @Test
+  public void testMaintenanceWaitsForMaintenanceToComplete()
+      throws NodeNotFoundException {
+    DatanodeDetails dn1 = MockDatanodeDetails.randomDatanodeDetails();
+    nodeManager.register(dn1,
+        new NodeStatus(ENTERING_MAINTENANCE,
+            HddsProtos.NodeState.HEALTHY));
+
+    // Add the node to the monitor; it should transition to
+    // IN_MAINTENANCE as there are no containers to replicate.
+    monitor.startMonitoring(dn1);
+    monitor.run();
+    assertEquals(1, monitor.getTrackedNodeCount());
+    DatanodeDetails node = getFirstTrackedNode();
+    assertTrue(nodeManager.getNodeStatus(dn1).isInMaintenance());
+
+    // Running the monitor again causes the node to remain in maintenance
+    monitor.run();
+    assertEquals(1, monitor.getTrackedNodeCount());
+    assertTrue(nodeManager.getNodeStatus(dn1).isInMaintenance());
+
+    // Set the maintenance end time to a time in the past and then the node
+    // should complete the workflow and transition to IN_SERVICE
+    nodeManager.setNodeOperationalState(node,
+        HddsProtos.NodeOperationalState.IN_MAINTENANCE, -1);
+    monitor.run();
+    assertEquals(0, monitor.getTrackedNodeCount());
+    assertEquals(IN_SERVICE,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+  }
+
+  @Test
+  public void testMaintenanceEndsClosingPipelines()
+      throws NodeNotFoundException {
+    DatanodeDetails dn1 = MockDatanodeDetails.randomDatanodeDetails();
+    nodeManager.register(dn1,
+        new NodeStatus(ENTERING_MAINTENANCE,
+            HddsProtos.NodeState.HEALTHY));
+    // Ensure the node has some pipelines
+    nodeManager.setPipelines(dn1, 2);
+    // Add the node to the monitor
+    monitor.startMonitoring(dn1);
+    monitor.run();
+    DatanodeDetails node = getFirstTrackedNode();
+    assertEquals(1, monitor.getTrackedNodeCount());
+    assertTrue(nodeManager.getNodeStatus(dn1).isEnteringMaintenance());
+
+    // Set the maintenance end time to the past and the node should complete
+    // the workflow and return to IN_SERVICE
+    nodeManager.setNodeOperationalState(node,
+        HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE, -1);
+    monitor.run();
+    assertEquals(0, monitor.getTrackedNodeCount());
+    assertEquals(IN_SERVICE,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+  }
+
+  @Test
+  public void testMaintenanceEndsWhileReplicatingContainers()
+      throws ContainerNotFoundException, NodeNotFoundException {
+    DatanodeDetails dn1 = MockDatanodeDetails.randomDatanodeDetails();
+    nodeManager.register(dn1,
+        new NodeStatus(ENTERING_MAINTENANCE,
+            HddsProtos.NodeState.HEALTHY));
+
+    nodeManager.setContainers(dn1, generateContainers(3));
+    mockGetContainerReplicaCount(
+        HddsProtos.LifeCycleState.CLOSED,
+        IN_MAINTENANCE,
+        ENTERING_MAINTENANCE,
+        IN_MAINTENANCE);
+
+    // Add the node to the monitor; it should transition to
+    // REPLICATE_CONTAINERS as the containers are under-replicated for
+    // maintenance.
+    monitor.startMonitoring(dn1);
+    monitor.run();
+    assertEquals(1, monitor.getTrackedNodeCount());
+    DatanodeDetails node = getFirstTrackedNode();
+    assertTrue(nodeManager.getNodeStatus(dn1).isEnteringMaintenance());
+
+    nodeManager.setNodeOperationalState(node,
+        HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE, -1);
+    monitor.run();
+    assertEquals(0, monitor.getTrackedNodeCount());
+    assertEquals(IN_SERVICE,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+  }
+
+  @Test
+  public void testDeadMaintenanceNodeDoesNotAbortWorkflow()
+      throws NodeNotFoundException {
+    DatanodeDetails dn1 = MockDatanodeDetails.randomDatanodeDetails();
+    nodeManager.register(dn1,
+        new NodeStatus(ENTERING_MAINTENANCE,
+            HddsProtos.NodeState.HEALTHY));
+
+    // Add the node to the monitor; it should transition to
+    // AWAIT_MAINTENANCE_END as there are no under-replicated containers.
+    monitor.startMonitoring(dn1);
+    monitor.run();
+    assertEquals(1, monitor.getTrackedNodeCount());
+    DatanodeDetails node = getFirstTrackedNode();
+    assertTrue(nodeManager.getNodeStatus(dn1).isInMaintenance());
+
+    // Set the node dead and ensure the workflow does not end
+    NodeStatus status = nodeManager.getNodeStatus(dn1);
+    nodeManager.setNodeStatus(dn1, new NodeStatus(
+        status.getOperationalState(), HddsProtos.NodeState.DEAD));
+
+    // Running the monitor again causes the node to remain in maintenance
+    monitor.run();
+    assertEquals(1, monitor.getTrackedNodeCount());
+    assertTrue(nodeManager.getNodeStatus(dn1).isInMaintenance());
+  }
+
+  @Test
+  public void testCancelledNodesMovedToInService()
+      throws NodeNotFoundException {
+    DatanodeDetails dn1 = MockDatanodeDetails.randomDatanodeDetails();
+    nodeManager.register(dn1,
+        new NodeStatus(ENTERING_MAINTENANCE,
+            HddsProtos.NodeState.HEALTHY));
+
+    // Add the node to the monitor; it should transition to
+    // AWAIT_MAINTENANCE_END as there are no under-replicated containers.
+    monitor.startMonitoring(dn1);
+    monitor.run();
+    assertEquals(1, monitor.getTrackedNodeCount());
+    DatanodeDetails node = getFirstTrackedNode();
+    assertTrue(nodeManager.getNodeStatus(dn1).isInMaintenance());
+
+    // Now cancel the node and run the monitor, the node should be IN_SERVICE
+    monitor.stopMonitoring(dn1);
+    monitor.run();
+    assertEquals(0, monitor.getTrackedNodeCount());
+    assertEquals(IN_SERVICE,
+        nodeManager.getNodeStatus(dn1).getOperationalState());
+  }
+
+  /**
+   * Generate a set of ContainerID, starting from an ID of zero up to the given
+   * count minus 1.
+   * @param count The number of ContainerID objects to generate.
+   * @return A Set of ContainerID objects.
+   */
+  private Set<ContainerID> generateContainers(int count) {
+    Set<ContainerID> containers = new HashSet<>();
+    for (int i=0; i<count; i++) {
+      containers.add(new ContainerID(i));
+    }
+    return containers;
+  }
+
+  /**
+   * Create a ContainerReplicaCount object, including a container with the
+   * requested ContainerID and state, along with a set of replicas of the given
+   * states.
+   * @param containerID The ID of the container to create and include
+   * @param containerState The state of the container
+   * @param states Create a replica for each of the given states.
+   * @return A ContainerReplicaCount containing the generated container and
+   *         replica set
+   */
+  private ContainerReplicaCount generateReplicaCount(ContainerID containerID,
+      HddsProtos.LifeCycleState containerState,
+      HddsProtos.NodeOperationalState...states) {
+    Set<ContainerReplica> replicas = new HashSet<>();
+    for (HddsProtos.NodeOperationalState s : states) {
+      replicas.add(generateReplica(containerID, s, CLOSED));
+    }
+    ContainerInfo container = new ContainerInfo.Builder()
+        .setContainerID(containerID.getId())
+        .setState(containerState)
+        .build();
+
+    return new ContainerReplicaCount(container, replicas, 0, 0, 3, 2);
+  }
+
+  /**
+   * Generate a new ContainerReplica with the given containerID and State.
+   * @param containerID The ID the replica is associated with
+   * @param nodeState The persistedOpState stored in datanodeDetails.
+   * @param replicaState The state of the generated replica.
+   * @return A containerReplica with the given ID and state
+   */
+  private ContainerReplica generateReplica(ContainerID containerID,
+      HddsProtos.NodeOperationalState nodeState,
+      ContainerReplicaProto.State replicaState) {
+    DatanodeDetails dn = MockDatanodeDetails.randomDatanodeDetails();
+    dn.setPersistedOpState(nodeState);
+    return ContainerReplica.newBuilder()
+        .setContainerState(replicaState)
+        .setContainerID(containerID)
+        .setSequenceId(1)
+        .setDatanodeDetails(dn)
+        .build();
+  }
+
+  /**
+   * Helper method to get the first node from the set of trackedNodes within
+   * the monitor.
+   * @return DatanodeDetails for the first tracked node found.
+   */
+  private DatanodeDetails getFirstTrackedNode() {
+    return
+        monitor.getTrackedNodes().toArray(new DatanodeDetails[0])[0];
+  }
+
+  /**
+   * The only interaction the DatanodeAdminMonitor has with the
+   * ReplicationManager is to request a ContainerReplicaCount object for each
+   * container on nodes being decommissioned or moved to maintenance. This
+   * method mocks that interface to return a ContainerReplicaCount with a
+   * container in the given containerState and a set of replicas in the given
+   * replicaStates.
+   * @param containerState
+   * @param replicaStates
+   * @throws ContainerNotFoundException
+   */
+  private void mockGetContainerReplicaCount(
+      HddsProtos.LifeCycleState containerState,
+      HddsProtos.NodeOperationalState...replicaStates)
+      throws ContainerNotFoundException {
+    reset(repManager);
+    Mockito.when(repManager.getContainerReplicaCount(
+        Mockito.any(ContainerID.class)))
+        .thenAnswer(invocation ->
+            generateReplicaCount((ContainerID)invocation.getArguments()[0],
+                containerState, replicaStates));
+  }
+
+  /**
+   * This simple internal class is used to track and handle any DatanodeAdmin
+   * events fired by the DatanodeAdminMonitor during tests.
+   */
+  private class DatanodeAdminHandler implements
+      EventHandler<DatanodeDetails> {
+
+    private AtomicInteger invocation = new AtomicInteger(0);
+
+    @Override
+    public void onMessage(final DatanodeDetails dn,
+                          final EventPublisher publisher) {
+      invocation.incrementAndGet();
+    }
+
+    public int getInvocation() {
+      return invocation.get();
+    }
+  }
+}
\ No newline at end of file
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java
index 3e725ce..23ca76b 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java
@@ -35,6 +35,7 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.ContainerReplicaProto;
 import org.apache.hadoop.hdds.protocol.proto
@@ -42,6 +43,7 @@
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.StorageReportProto;
 import org.apache.hadoop.hdds.scm.HddsTestUtils;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.TestUtils;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
 import org.apache.hadoop.hdds.scm.container.ContainerManager;
@@ -90,6 +92,7 @@
     OzoneConfiguration conf = new OzoneConfiguration();
     conf.setTimeDuration(HddsConfigKeys.HDDS_SCM_WAIT_TIME_AFTER_SAFE_MODE_EXIT,
         0, TimeUnit.SECONDS);
+    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 2);
     storageDir = GenericTestUtils.getTempPath(
         TestDeadNodeHandler.class.getSimpleName() + UUID.randomUUID());
     conf.set(HddsConfigKeys.OZONE_METADATA_DIRS, storageDir);
@@ -191,22 +194,44 @@
     TestUtils.closeContainer(containerManager, container2.containerID());
     TestUtils.quasiCloseContainer(containerManager, container3.containerID());
 
+    // First set the node to IN_MAINTENANCE and ensure the container replicas
+    // are not removed on the dead event
+    nodeManager.setNodeOperationalState(datanode1,
+        HddsProtos.NodeOperationalState.IN_MAINTENANCE);
     deadNodeHandler.onMessage(datanode1, publisher);
 
     Set<ContainerReplica> container1Replicas = containerManager
         .getContainerReplicas(new ContainerID(container1.getContainerID()));
+    Assert.assertEquals(2, container1Replicas.size());
+
+    Set<ContainerReplica> container2Replicas = containerManager
+        .getContainerReplicas(new ContainerID(container2.getContainerID()));
+    Assert.assertEquals(2, container2Replicas.size());
+
+    Set<ContainerReplica> container3Replicas = containerManager
+            .getContainerReplicas(new ContainerID(container3.getContainerID()));
+    Assert.assertEquals(1, container3Replicas.size());
+
+    // Now set the node to anything other than IN_MAINTENANCE and the relevant
+    // replicas should be removed
+    nodeManager.setNodeOperationalState(datanode1,
+        HddsProtos.NodeOperationalState.IN_SERVICE);
+    deadNodeHandler.onMessage(datanode1, publisher);
+
+    container1Replicas = containerManager
+        .getContainerReplicas(new ContainerID(container1.getContainerID()));
     Assert.assertEquals(1, container1Replicas.size());
     Assert.assertEquals(datanode2,
         container1Replicas.iterator().next().getDatanodeDetails());
 
-    Set<ContainerReplica> container2Replicas = containerManager
+    container2Replicas = containerManager
         .getContainerReplicas(new ContainerID(container2.getContainerID()));
     Assert.assertEquals(1, container2Replicas.size());
     Assert.assertEquals(datanode2,
         container2Replicas.iterator().next().getDatanodeDetails());
 
-    Set<ContainerReplica> container3Replicas = containerManager
-            .getContainerReplicas(new ContainerID(container3.getContainerID()));
+    container3Replicas = containerManager
+        .getContainerReplicas(new ContainerID(container3.getContainerID()));
     Assert.assertEquals(1, container3Replicas.size());
     Assert.assertEquals(datanode3,
         container3Replicas.iterator().next().getDatanodeDetails());
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeDecommissionManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeDecommissionManager.java
new file mode 100644
index 0000000..5f160c9
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeDecommissionManager.java
@@ -0,0 +1,297 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.HddsTestUtils;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
+import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Before;
+import org.junit.Test;
+import java.io.IOException;
+import java.util.List;
+import java.util.UUID;
+import java.util.Arrays;
+import java.util.ArrayList;
+import static junit.framework.TestCase.assertEquals;
+import static org.assertj.core.api.Fail.fail;
+import static org.junit.Assert.assertNotEquals;
+
+/**
+ * Unit tests for the decommission manager.
+ */
+
+public class TestNodeDecommissionManager {
+
+  private NodeDecommissionManager decom;
+  private StorageContainerManager scm;
+  private NodeManager nodeManager;
+  private OzoneConfiguration conf;
+  private String storageDir;
+
+  @Before
+  public void setup() throws Exception {
+    conf = new OzoneConfiguration();
+    storageDir = GenericTestUtils.getTempPath(
+        TestDeadNodeHandler.class.getSimpleName() + UUID.randomUUID());
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS, storageDir);
+    nodeManager = createNodeManager(conf);
+    decom = new NodeDecommissionManager(
+        conf, nodeManager, null, null, null);
+  }
+
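+  // generateDatanodes() is assumed, from the way the tests below use it, to
+  // register a list of datanodes with the node manager where some entries
+  // (for example index 0 and index 10) share an IP address and differ only
+  // by port.
+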
+  @Test
+  public void testHostStringsParseCorrectly()
+      throws InvalidHostStringException {
+    NodeDecommissionManager.HostDefinition def =
+        new NodeDecommissionManager.HostDefinition("foobar");
+    assertEquals("foobar", def.getHostname());
+    assertEquals(-1, def.getPort());
+
+    def = new NodeDecommissionManager.HostDefinition(" foobar ");
+    assertEquals("foobar", def.getHostname());
+    assertEquals(-1, def.getPort());
+
+    def = new NodeDecommissionManager.HostDefinition("foobar:1234");
+    assertEquals("foobar", def.getHostname());
+    assertEquals(1234, def.getPort());
+
+    def = new NodeDecommissionManager.HostDefinition(
+        "foobar.mycompany.com:1234");
+    assertEquals("foobar.mycompany.com", def.getHostname());
+    assertEquals(1234, def.getPort());
+
+    try {
+      def = new NodeDecommissionManager.HostDefinition("foobar:abcd");
+      fail("InvalidHostStringException should have been thrown");
+    } catch (InvalidHostStringException e) {
+    }
+  }
+
+  @Test
+  public void testAnyInvalidHostThrowsException()
+      throws InvalidHostStringException {
+    List<DatanodeDetails> dns = generateDatanodes();
+
+    // Try to decommission a host that does exist, but give an incorrect port
+    try {
+      decom.decommissionNodes(Arrays.asList(dns.get(1).getIpAddress()+":10"));
+      fail("InvalidHostStringException expected");
+    } catch (InvalidHostStringException e) {
+    }
+
+    // Try to decommission a host that does not exist
+    try {
+      decom.decommissionNodes(Arrays.asList("123.123.123.123"));
+      fail("InvalidHostStringException expected");
+    } catch (InvalidHostStringException e) {
+    }
+
+    // Try to decommission a host that does exist and a host that does not
+    try {
+      decom.decommissionNodes(Arrays.asList(
+          dns.get(1).getIpAddress(), "123,123,123,123"));
+      fail("InvalidHostStringException expected");
+    } catch (InvalidHostStringException e) {
+    }
+
+    // Try to decommission a host with many DNs on the address with no port
+    try {
+      decom.decommissionNodes(Arrays.asList(
+          dns.get(0).getIpAddress()));
+      fail("InvalidHostStringException expected");
+    } catch (InvalidHostStringException e) {
+    }
+
+    // Try to decommission a host with many DNs on the address with a port
+    // that does not exist
+    try {
+      decom.decommissionNodes(Arrays.asList(
+          dns.get(0).getIpAddress()+":10"));
+      fail("InvalidHostStringException expected");
+    } catch (InvalidHostStringException e) {
+    }
+  }
+
+  @Test
+  public void testNodesCanBeDecommissionedAndRecommissioned()
+      throws InvalidHostStringException, NodeNotFoundException {
+    List<DatanodeDetails> dns = generateDatanodes();
+
+    // Decommission 2 valid nodes
+    decom.decommissionNodes(Arrays.asList(dns.get(1).getIpAddress(),
+        dns.get(2).getIpAddress()));
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+        nodeManager.getNodeStatus(dns.get(1)).getOperationalState());
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+        nodeManager.getNodeStatus(dns.get(2)).getOperationalState());
+
+    // Running the command again gives no error - nodes already decommissioning
+    // are silently ignored.
+    decom.decommissionNodes(Arrays.asList(dns.get(1).getIpAddress(),
+        dns.get(2).getIpAddress()));
+
+    // Attempt to decommission dn(10), which shares its IP with dn(0) and has
+    // hard-coded ports 3456, 4567 and 5678
+    DatanodeDetails multiDn = dns.get(10);
+    String multiAddr =
+        multiDn.getIpAddress()+":"+multiDn.getPorts().get(0).getValue();
+    decom.decommissionNodes(Arrays.asList(multiAddr));
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+        nodeManager.getNodeStatus(multiDn).getOperationalState());
+
+    // Recommission all 3 hosts
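+    // (the recommission takes effect once the monitor is run explicitly below)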
+    decom.recommissionNodes(Arrays.asList(
+        multiAddr, dns.get(1).getIpAddress(), dns.get(2).getIpAddress()));
+    decom.getMonitor().run();
+    assertEquals(HddsProtos.NodeOperationalState.IN_SERVICE,
+        nodeManager.getNodeStatus(dns.get(1)).getOperationalState());
+    assertEquals(HddsProtos.NodeOperationalState.IN_SERVICE,
+        nodeManager.getNodeStatus(dns.get(2)).getOperationalState());
+    assertEquals(HddsProtos.NodeOperationalState.IN_SERVICE,
+        nodeManager.getNodeStatus(dns.get(10)).getOperationalState());
+  }
+
+  @Test
+  public void testNodesCanBePutIntoMaintenanceAndRecommissioned()
+      throws InvalidHostStringException, NodeNotFoundException {
+    List<DatanodeDetails> dns = generateDatanodes();
+
+    // Put 2 valid nodes into maintenance
+    decom.startMaintenanceNodes(Arrays.asList(dns.get(1).getIpAddress(),
+        dns.get(2).getIpAddress()), 100);
+    assertEquals(HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE,
+        nodeManager.getNodeStatus(dns.get(1)).getOperationalState());
+    assertNotEquals(0, nodeManager.getNodeStatus(
+        dns.get(1)).getOpStateExpiryEpochSeconds());
+    assertEquals(HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE,
+        nodeManager.getNodeStatus(dns.get(2)).getOperationalState());
+    assertNotEquals(0, nodeManager.getNodeStatus(
+        dns.get(2)).getOpStateExpiryEpochSeconds());
+
+    // Running the command again gives no error - nodes already entering
+    // maintenance are silently ignored.
+    decom.startMaintenanceNodes(Arrays.asList(dns.get(1).getIpAddress(),
+        dns.get(2).getIpAddress()), 100);
+
+    // Attempt to put dn(10) into maintenance; it shares its IP with dn(0) and
+    // has hard-coded ports 3456, 4567 and 5678
+    DatanodeDetails multiDn = dns.get(10);
+    String multiAddr =
+        multiDn.getIpAddress()+":"+multiDn.getPorts().get(0).getValue();
+    decom.startMaintenanceNodes(Arrays.asList(multiAddr), 100);
+    assertEquals(HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE,
+        nodeManager.getNodeStatus(multiDn).getOperationalState());
+
+    // Recommission all 3 hosts
+    decom.recommissionNodes(Arrays.asList(
+        multiAddr, dns.get(1).getIpAddress(), dns.get(2).getIpAddress()));
+    decom.getMonitor().run();
+    assertEquals(HddsProtos.NodeOperationalState.IN_SERVICE,
+        nodeManager.getNodeStatus(dns.get(1)).getOperationalState());
+    assertEquals(HddsProtos.NodeOperationalState.IN_SERVICE,
+        nodeManager.getNodeStatus(dns.get(2)).getOperationalState());
+    assertEquals(HddsProtos.NodeOperationalState.IN_SERVICE,
+        nodeManager.getNodeStatus(dns.get(10)).getOperationalState());
+  }
+
+  @Test
+  public void testNodesCannotTransitionFromDecomToMaint() throws Exception {
+    List<DatanodeDetails> dns = generateDatanodes();
+
+    // Put 1 node into maintenance and another into decom
+    decom.startMaintenance(dns.get(1), 100);
+    decom.startDecommission(dns.get(2));
+    assertEquals(HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE,
+        nodeManager.getNodeStatus(dns.get(1)).getOperationalState());
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+        nodeManager.getNodeStatus(dns.get(2)).getOperationalState());
+
+    // Try to go from maint to decom:
+    try {
+      decom.startDecommission(dns.get(1));
+      fail("Expected InvalidNodeStateException");
+    } catch (InvalidNodeStateException e) {
+    }
+
+    // Try to go from decom to maint:
+    try {
+      decom.startMaintenance(dns.get(2), 100);
+      fail("Expected InvalidNodeStateException");
+    } catch (InvalidNodeStateException e) {
+    }
+
+    // Ensure the states are still as before
+    assertEquals(HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE,
+        nodeManager.getNodeStatus(dns.get(1)).getOperationalState());
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONING,
+        nodeManager.getNodeStatus(dns.get(2)).getOperationalState());
+  }
+
+  private SCMNodeManager createNodeManager(OzoneConfiguration config)
+      throws IOException, AuthenticationException {
+    scm = HddsTestUtils.getScm(config);
+    return (SCMNodeManager) scm.getScmNodeManager();
+  }
+
+  /**
+   * Generate a list of random DNs and return the list. A total of 11 DNs will
+   * be generated and registered with the node manager. Indexes 0 and 10 will
+   * have the same IP and host, while the rest will have unique IPs and hosts.
+   * The DN at index 10 has 3 hard-coded ports: 3456, 4567 and 5678. All other
+   * DNs will have their ports set to 0.
+   * @return The list of DatanodeDetails generated
+   */
+  private List<DatanodeDetails> generateDatanodes() {
+    List<DatanodeDetails> dns = new ArrayList<>();
+    for (int i=0; i<10; i++) {
+      DatanodeDetails dn = MockDatanodeDetails.randomDatanodeDetails();
+      dns.add(dn);
+      nodeManager.register(dn, null, null, null);
+    }
+    // We have 10 random DNs, we want to create another one that is on the same
+    // host as some of the others.
+    DatanodeDetails multiDn = dns.get(0);
+
+    DatanodeDetails.Builder builder = DatanodeDetails.newBuilder();
+    builder.setUuid(UUID.randomUUID())
+        .setHostName(multiDn.getHostName())
+        .setIpAddress(multiDn.getIpAddress())
+        .addPort(DatanodeDetails.newPort(
+            DatanodeDetails.Port.Name.STANDALONE, 3456))
+        .addPort(DatanodeDetails.newPort(
+            DatanodeDetails.Port.Name.RATIS, 4567))
+        .addPort(DatanodeDetails.newPort(
+            DatanodeDetails.Port.Name.REST, 5678))
+        .setNetworkLocation(multiDn.getNetworkLocation());
+
+    DatanodeDetails dn = builder.build();
+    nodeManager.register(dn, null, null, null);
+    dns.add(dn);
+    return dns;
+  }
+
+}
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeStateManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeStateManager.java
new file mode 100644
index 0000000..56637bd
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeStateManager.java
@@ -0,0 +1,320 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
+import org.apache.hadoop.hdds.upgrade.HDDSLayoutVersionManager;
+import org.apache.hadoop.hdds.utils.HddsServerUtil;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.scm.node.states.NodeAlreadyExistsException;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
+import org.apache.hadoop.hdds.server.events.Event;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.upgrade.LayoutVersionManager;
+import org.apache.hadoop.util.Time;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.UUID;
+
+import static junit.framework.TestCase.assertEquals;
+
+/**
+ * Class to test the NodeStateManager, which is an internal class used by
+ * the SCMNodeManager.
+ */
+
+public class TestNodeStateManager {
+
+  private NodeStateManager nsm;
+  private ConfigurationSource conf;
+  private MockEventPublisher eventPublisher;
+  private static final int TEST_SOFTWARE_LAYOUT_VERSION = 0;
+  private static final int TEST_METADATA_LAYOUT_VERSION = 0;
+
+  @Before
+  public void setUp() {
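+    // Minimal ConfigurationSource stub: every key resolves to null, so the
+    // code under test falls back to its default configuration values.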
+    conf = new ConfigurationSource() {
+      @Override
+      public String get(String key) {
+        return null;
+      }
+
+      @Override
+      public Collection<String> getConfigKeys() {
+        return null;
+      }
+
+      @Override
+      public char[] getPassword(String key) throws IOException {
+        return new char[0];
+      }
+    };
+    eventPublisher = new MockEventPublisher();
+    LayoutVersionManager mockVersionManager =
+        Mockito.mock(HDDSLayoutVersionManager.class);
+    Mockito.when(mockVersionManager.getMetadataLayoutVersion())
+        .thenReturn(TEST_METADATA_LAYOUT_VERSION);
+    Mockito.when(mockVersionManager.getSoftwareLayoutVersion())
+        .thenReturn(TEST_SOFTWARE_LAYOUT_VERSION);
+    nsm = new NodeStateManager(conf, eventPublisher, mockVersionManager);
+  }
+
+  @After
+  public void tearDown() {
+  }
+
+  @Test
+  public void testNodeCanBeAddedAndRetrieved()
+      throws NodeAlreadyExistsException, NodeNotFoundException {
+    // Create a datanode, then add and retrieve it
+    DatanodeDetails dn = generateDatanode();
+    nsm.addNode(dn, null);
+    assertEquals(dn.getUuid(), nsm.getNode(dn).getUuid());
+    // Now get the status of the newly added node; it should be
+    // IN_SERVICE and HEALTHY_READONLY
+    NodeStatus expectedState = NodeStatus.inServiceHealthyReadOnly();
+    assertEquals(expectedState, nsm.getNodeStatus(dn));
+  }
+
+  @Test
+  public void testGetAllNodesReturnsCorrectly()
+      throws NodeAlreadyExistsException {
+    DatanodeDetails dn = generateDatanode();
+    nsm.addNode(dn, null);
+    dn = generateDatanode();
+    nsm.addNode(dn, null);
+    assertEquals(2, nsm.getAllNodes().size());
+    assertEquals(2, nsm.getTotalNodeCount());
+  }
+
+  @Test
+  public void testGetNodeCountReturnsCorrectly()
+      throws NodeAlreadyExistsException {
+    DatanodeDetails dn = generateDatanode();
+    nsm.addNode(dn, null);
+    assertEquals(1, nsm.getNodes(NodeStatus.inServiceHealthyReadOnly()).size());
+    assertEquals(0, nsm.getNodes(NodeStatus.inServiceStale()).size());
+  }
+
+  @Test
+  public void testGetNodeCount() throws NodeAlreadyExistsException {
+    DatanodeDetails dn = generateDatanode();
+    nsm.addNode(dn, null);
+    assertEquals(1, nsm.getNodeCount(
+        NodeStatus.inServiceHealthyReadOnly()));
+    assertEquals(0, nsm.getNodeCount(NodeStatus.inServiceStale()));
+  }
+
+  @Test
+  public void testNodesMarkedDeadAndStale()
+      throws NodeAlreadyExistsException, NodeNotFoundException {
+    long now = Time.monotonicNow();
+
+    // Set the dead and stale limits to be 1 second larger than configured
+    long staleLimit = HddsServerUtil.getStaleNodeInterval(conf) + 1000;
+    long deadLimit = HddsServerUtil.getDeadNodeInterval(conf) + 1000;
+
+    DatanodeDetails staleDn = generateDatanode();
+    nsm.addNode(staleDn, null);
+    nsm.getNode(staleDn).updateLastHeartbeatTime(now - staleLimit);
+
+    DatanodeDetails deadDn = generateDatanode();
+    nsm.addNode(deadDn, null);
+    nsm.getNode(deadDn).updateLastHeartbeatTime(now - deadLimit);
+
+    DatanodeDetails healthyDn = generateDatanode();
+    nsm.addNode(healthyDn, null);
+    nsm.getNode(healthyDn).updateLastHeartbeatTime();
+
+    nsm.checkNodesHealth();
+    assertEquals(healthyDn, nsm.getHealthyNodes().get(0));
+    // A node cannot go directly to dead. It must be marked stale first
+    // due to the allowed state transitions, so at this point we expect to
+    // find 2 stale nodes.
+    assertEquals(2, nsm.getStaleNodes().size());
+    // Now check health again and the dead node should move into deadNodes()
+    nsm.checkNodesHealth();
+    assertEquals(staleDn, nsm.getStaleNodes().get(0));
+    assertEquals(deadDn, nsm.getDeadNodes().get(0));
+  }
+
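+  // Health transitions exercised below, with the event asserted after each
+  // checkNodesHealth() call:
+  //   registration       -> NEW_NODE
+  //   heartbeat seen     -> HEALTHY (READ_ONLY_HEALTHY_TO_HEALTHY_NODE)
+  //   heartbeat too old  -> STALE (STALE_NODE)
+  //   heartbeat older    -> DEAD (DEAD_NODE)
+  //   heartbeat resumes  -> HEALTHY_READONLY
+  //                         (NON_HEALTHY_TO_READONLY_HEALTHY_NODE)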
+  @Test
+  public void testNodeCanTransitionThroughHealthStatesAndFiresEvents()
+      throws NodeAlreadyExistsException, NodeNotFoundException {
+    long now = Time.monotonicNow();
+
+    // Set the dead and stale limits to be 1 second larger than configured
+    long staleLimit = HddsServerUtil.getStaleNodeInterval(conf) + 1000;
+    long deadLimit = HddsServerUtil.getDeadNodeInterval(conf) + 1000;
+
+    DatanodeDetails dn = generateDatanode();
+    nsm.addNode(dn, null);
+    assertEquals(SCMEvents.NEW_NODE, eventPublisher.getLastEvent());
+    DatanodeInfo dni = nsm.getNode(dn);
+    dni.updateLastHeartbeatTime();
+
+    // Ensure node is initially healthy
+    eventPublisher.clearEvents();
+    nsm.checkNodesHealth();
+    assertEquals(NodeState.HEALTHY, nsm.getNodeStatus(dn).getHealth());
+    assertEquals(SCMEvents.READ_ONLY_HEALTHY_TO_HEALTHY_NODE,
+        eventPublisher.getLastEvent());
+
+    // Set the heartbeat old enough to make it stale
+    dni.updateLastHeartbeatTime(now - staleLimit);
+    nsm.checkNodesHealth();
+    assertEquals(NodeState.STALE, nsm.getNodeStatus(dn).getHealth());
+    assertEquals(SCMEvents.STALE_NODE, eventPublisher.getLastEvent());
+
+    // Now make it dead
+    dni.updateLastHeartbeatTime(now - deadLimit);
+    nsm.checkNodesHealth();
+    assertEquals(NodeState.DEAD, nsm.getNodeStatus(dn).getHealth());
+    assertEquals(SCMEvents.DEAD_NODE, eventPublisher.getLastEvent());
+
+    // Transition back to healthy-readonly from dead
+    dni.updateLastHeartbeatTime();
+    nsm.checkNodesHealth();
+    assertEquals(NodeState.HEALTHY_READONLY, nsm.getNodeStatus(dn).getHealth());
+    assertEquals(SCMEvents.NON_HEALTHY_TO_READONLY_HEALTHY_NODE,
+        eventPublisher.getLastEvent());
+
+    // Make the node stale again, then let it recover to healthy-readonly.
+    dni.updateLastHeartbeatTime(now - staleLimit);
+    nsm.checkNodesHealth();
+    assertEquals(NodeState.STALE, nsm.getNodeStatus(dn).getHealth());
+    assertEquals(SCMEvents.STALE_NODE, eventPublisher.getLastEvent());
+    dni.updateLastHeartbeatTime();
+    nsm.checkNodesHealth();
+    assertEquals(NodeState.HEALTHY_READONLY, nsm.getNodeStatus(dn).getHealth());
+    assertEquals(SCMEvents.NON_HEALTHY_TO_READONLY_HEALTHY_NODE,
+        eventPublisher.getLastEvent());
+  }
+
+  @Test
+  public void testNodeOpStateCanBeSet()
+      throws NodeAlreadyExistsException, NodeNotFoundException {
+    DatanodeDetails dn = generateDatanode();
+    nsm.addNode(dn, null);
+
+    nsm.setNodeOperationalState(dn,
+        HddsProtos.NodeOperationalState.DECOMMISSIONED);
+
+    NodeStatus newStatus = nsm.getNodeStatus(dn);
+    assertEquals(HddsProtos.NodeOperationalState.DECOMMISSIONED,
+        newStatus.getOperationalState());
+    assertEquals(NodeState.HEALTHY_READONLY,
+        newStatus.getHealth());
+  }
+
+  @Test
+  public void testHealthEventsFiredWhenOpStateChanged()
+      throws NodeAlreadyExistsException, NodeNotFoundException {
+    DatanodeDetails dn = generateDatanode();
+    nsm.addNode(dn, null);
+
+    // First set the node to decommissioned, then run through all op states in
+    // order and ensure the NON_HEALTHY_TO_READONLY_HEALTHY_NODE event fires
+    nsm.setNodeOperationalState(dn,
+        HddsProtos.NodeOperationalState.DECOMMISSIONED);
+    for (HddsProtos.NodeOperationalState s :
+        HddsProtos.NodeOperationalState.values()) {
+      eventPublisher.clearEvents();
+      nsm.setNodeOperationalState(dn, s);
+      assertEquals(SCMEvents.NON_HEALTHY_TO_READONLY_HEALTHY_NODE,
+          eventPublisher.getLastEvent());
+    }
+
+    // Now make the node stale and run through all states again ensuring the
+    // stale event gets fired
+    long now = Time.monotonicNow();
+    long staleLimit = HddsServerUtil.getStaleNodeInterval(conf) + 1000;
+    long deadLimit = HddsServerUtil.getDeadNodeInterval(conf) + 1000;
+    DatanodeInfo dni = nsm.getNode(dn);
+    dni.updateLastHeartbeatTime(now - staleLimit);
+    nsm.checkNodesHealth();
+    assertEquals(NodeState.STALE, nsm.getNodeStatus(dn).getHealth());
+    nsm.setNodeOperationalState(dn,
+        HddsProtos.NodeOperationalState.DECOMMISSIONED);
+    for (HddsProtos.NodeOperationalState s :
+        HddsProtos.NodeOperationalState.values()) {
+      eventPublisher.clearEvents();
+      nsm.setNodeOperationalState(dn, s);
+      assertEquals(SCMEvents.STALE_NODE, eventPublisher.getLastEvent());
+    }
+
+    // Finally make the node dead and run through all the op states again
+    dni.updateLastHeartbeatTime(now - deadLimit);
+    nsm.checkNodesHealth();
+    assertEquals(NodeState.DEAD, nsm.getNodeStatus(dn).getHealth());
+    nsm.setNodeOperationalState(dn,
+        HddsProtos.NodeOperationalState.DECOMMISSIONED);
+    for (HddsProtos.NodeOperationalState s :
+        HddsProtos.NodeOperationalState.values()) {
+      eventPublisher.clearEvents();
+      nsm.setNodeOperationalState(dn, s);
+      assertEquals(SCMEvents.DEAD_NODE, eventPublisher.getLastEvent());
+    }
+  }
+
+  private DatanodeDetails generateDatanode() {
+    return DatanodeDetails.newBuilder().setUuid(UUID.randomUUID()).build();
+  }
+
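+  /**
+   * Minimal EventPublisher stub that records every fired event and payload so
+   * the tests above can assert on the most recently fired event.
+   */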
+  static class MockEventPublisher implements EventPublisher {
+
+    private List<Event> events = new ArrayList<>();
+    private List<Object> payloads = new ArrayList<>();
+
+    public void clearEvents() {
+      events.clear();
+      payloads.clear();
+    }
+
+    public List<Event> getEvents() {
+      return events;
+    }
+
+    public Event getLastEvent() {
+      if (events.isEmpty()) {
+        return null;
+      } else {
+        return events.get(events.size() - 1);
+      }
+    }
+
+    @Override
+    public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void
+        fireEvent(EVENT_TYPE event, PAYLOAD payload) {
+      events.add(event);
+      payloads.add(payload);
+    }
+  }
+
+}
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java
index b3f1e8f..9aea69b 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java
@@ -31,9 +31,9 @@
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.hdds.DFSConfigKeysLegacy;
 import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.LayoutVersionProto;
@@ -46,6 +46,7 @@
 import org.apache.hadoop.hdds.scm.TestUtils;
 import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
 import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
 import org.apache.hadoop.hdds.scm.server.SCMStorageConfig;
 import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
@@ -57,9 +58,20 @@
 import org.apache.hadoop.ozone.protocol.commands.RegisteredCommand;
 import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
 import org.apache.hadoop.ozone.upgrade.LayoutVersionManager;
+import org.apache.hadoop.ozone.protocol.commands.SetNodeOperationalStateCommand;
 import org.apache.hadoop.security.authentication.client.AuthenticationException;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.PathUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+import java.util.Map;
 
 import static java.util.concurrent.TimeUnit.MILLISECONDS;
 import static java.util.concurrent.TimeUnit.SECONDS;
@@ -68,22 +80,20 @@
 import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_HEARTBEAT_INTERVAL;
 import static org.apache.hadoop.hdds.protocol.MockDatanodeDetails.createDatanodeDetails;
 import static org.apache.hadoop.hdds.protocol.MockDatanodeDetails.randomDatanodeDetails;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DEAD;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY_READONLY;
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
 import static org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type.finalizeNewLayoutVersionCommand;
 import static org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.SCMRegisteredResponseProto.ErrorCode.errorNodeNotPermitted;
 import static org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.SCMRegisteredResponseProto.ErrorCode.success;
-import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DEADNODE_INTERVAL;
-import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_HEARTBEAT_PROCESS_INTERVAL;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_RATIS_PIPELINE_LIMIT;
-import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_STALENODE_INTERVAL;
 import static org.apache.hadoop.hdds.scm.TestUtils.getRandomPipelineReports;
 import static org.apache.hadoop.hdds.scm.events.SCMEvents.*;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys
+    .OZONE_SCM_DEADNODE_INTERVAL;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys
+    .OZONE_SCM_HEARTBEAT_PROCESS_INTERVAL;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys
+    .OZONE_SCM_STALENODE_INTERVAL;
 import static org.apache.hadoop.hdds.scm.events.SCMEvents.DATANODE_COMMAND;
-import org.junit.After;
-import org.junit.Assert;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 import static org.mockito.Mockito.mock;
@@ -91,11 +101,6 @@
 import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
 
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.ExpectedException;
 import org.mockito.ArgumentCaptor;
 import org.mockito.Mockito;
 
@@ -308,7 +313,13 @@
       }
       //TODO: wait for heartbeat to be processed
       Thread.sleep(4 * 1000);
-      assertEquals(count, nodeManager.getNodeCount(HEALTHY));
+      assertEquals(count, nodeManager.getNodeCount(
+          NodeStatus.inServiceHealthy()));
+
+      Map<String, Map<String, Integer>> nodeCounts = nodeManager.getNodeCount();
+      assertEquals(count,
+          nodeCounts.get(HddsProtos.NodeOperationalState.IN_SERVICE.name())
+              .get(HddsProtos.NodeState.HEALTHY.name()).intValue());
     }
   }
 
@@ -336,6 +347,35 @@
   }
 
   /**
+   * Ensure that a change to the operationalState of a node fires a datanode
+   * event of type SetNodeOperationalStateCommand.
+   */
+  @Test
+  @Ignore // TODO - this test is no longer valid as the heartbeat processing
+          //        now generates the command message.
+  public void testSetNodeOpStateAndCommandFired()
+      throws IOException, NodeNotFoundException, AuthenticationException {
+    final int interval = 100;
+
+    OzoneConfiguration conf = getConf();
+    conf.setTimeDuration(OZONE_SCM_HEARTBEAT_PROCESS_INTERVAL, interval,
+        MILLISECONDS);
+
+    try (SCMNodeManager nodeManager = createNodeManager(conf)) {
+      DatanodeDetails dn = TestUtils.createRandomDatanodeAndRegister(
+          nodeManager);
+      long expiry = System.currentTimeMillis() / 1000 + 1000;
+      nodeManager.setNodeOperationalState(dn,
+          HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE, expiry);
+      List<SCMCommand> commands = nodeManager.getCommandQueue(dn.getUuid());
+
+      Assert.assertTrue(commands.get(0).getClass().equals(
+          SetNodeOperationalStateCommand.class));
+      assertEquals(1, commands.size());
+    }
+  }
+
+  /**
    * Asserts that a single node moves from Healthy to stale node, then from
    * stale node to dead node if it misses enough heartbeats.
    *
@@ -388,15 +428,21 @@
       // Wait for 2 seconds, wait a total of 4 seconds to make sure that the
       // node moves into stale state.
       Thread.sleep(2 * 1000);
-      List<DatanodeDetails> staleNodeList = nodeManager.getNodes(STALE);
+      List<DatanodeDetails> staleNodeList =
+          nodeManager.getNodes(NodeStatus.inServiceStale());
       assertEquals("Expected to find 1 stale node",
-          1, nodeManager.getNodeCount(STALE));
+          1, nodeManager.getNodeCount(NodeStatus.inServiceStale()));
       assertEquals("Expected to find 1 stale node",
           1, staleNodeList.size());
       assertEquals("Stale node is not the expected ID", staleNode
           .getUuid(), staleNodeList.get(0).getUuid());
       Thread.sleep(1000);
 
+      Map<String, Map<String, Integer>> nodeCounts = nodeManager.getNodeCount();
+      assertEquals(1,
+          nodeCounts.get(HddsProtos.NodeOperationalState.IN_SERVICE.name())
+              .get(HddsProtos.NodeState.STALE.name()).intValue());
+
       // heartbeat good nodes again.
       for (DatanodeDetails dn : nodeList) {
         nodeManager.processHeartbeat(dn, layoutInfo);
@@ -407,18 +453,26 @@
       Thread.sleep(2 * 1000);
 
       // the stale node has been removed
-      staleNodeList = nodeManager.getNodes(STALE);
+      staleNodeList = nodeManager.getNodes(NodeStatus.inServiceStale());
+      nodeCounts = nodeManager.getNodeCount();
       assertEquals("Expected to find 1 stale node",
-          0, nodeManager.getNodeCount(STALE));
+          0, nodeManager.getNodeCount(NodeStatus.inServiceStale()));
       assertEquals("Expected to find 1 stale node",
           0, staleNodeList.size());
+      assertEquals(0,
+          nodeCounts.get(HddsProtos.NodeOperationalState.IN_SERVICE.name())
+              .get(HddsProtos.NodeState.STALE.name()).intValue());
 
       // Check for the dead node now.
-      List<DatanodeDetails> deadNodeList = nodeManager.getNodes(DEAD);
+      List<DatanodeDetails> deadNodeList =
+          nodeManager.getNodes(NodeStatus.inServiceDead());
       assertEquals("Expected to find 1 dead node", 1,
-          nodeManager.getNodeCount(DEAD));
+          nodeManager.getNodeCount(NodeStatus.inServiceDead()));
       assertEquals("Expected to find 1 dead node",
           1, deadNodeList.size());
+      assertEquals(1,
+          nodeCounts.get(HddsProtos.NodeOperationalState.IN_SERVICE.name())
+              .get(HddsProtos.NodeState.DEAD.name()).intValue());
       assertEquals("Dead node is not the expected ID", staleNode
           .getUuid(), deadNodeList.get(0).getUuid());
     }
@@ -470,8 +524,8 @@
 
       //Assert all nodes are healthy.
       assertEquals(2, nodeManager.getAllNodes().size());
-      assertEquals(2, nodeManager.getNodeCount(HEALTHY));
-
+      assertEquals(2,
+          nodeManager.getNodeCount(NodeStatus.inServiceHealthy()));
       /**
        * Simulate a JVM Pause and subsequent handling in following steps:
        * Step 1 : stop heartbeat check process for stale node interval
@@ -506,7 +560,7 @@
 
       // Step 4 : all nodes should still be HEALTHY
       assertEquals(2, nodeManager.getAllNodes().size());
-      assertEquals(2, nodeManager.getNodeCount(HEALTHY));
+      assertEquals(2, nodeManager.getNodeCount(NodeStatus.inServiceHealthy()));
 
       // Step 5 : heartbeat for node1
       nodeManager.processHeartbeat(node1, layoutInfo);
@@ -515,8 +569,8 @@
       Thread.sleep(1000);
 
       // Step 7 : node2 should transition to STALE
-      assertEquals(1, nodeManager.getNodeCount(HEALTHY));
-      assertEquals(1, nodeManager.getNodeCount(STALE));
+      assertEquals(1, nodeManager.getNodeCount(NodeStatus.inServiceHealthy()));
+      assertEquals(1, nodeManager.getNodeCount(NodeStatus.inServiceStale()));
     }
   }
 
@@ -681,7 +735,7 @@
 
       //Assert all nodes are healthy.
       assertEquals(3, nodeManager.getAllNodes().size());
-      assertEquals(3, nodeManager.getNodeCount(HEALTHY));
+      assertEquals(3, nodeManager.getNodeCount(NodeStatus.inServiceHealthy()));
 
       /**
        * Cluster state: Quiesced: We are going to sleep for 3 seconds. Which
@@ -689,7 +743,7 @@
        */
       Thread.sleep(3 * 1000);
       assertEquals(3, nodeManager.getAllNodes().size());
-      assertEquals(3, nodeManager.getNodeCount(STALE));
+      assertEquals(3, nodeManager.getNodeCount(NodeStatus.inServiceStale()));
 
 
       /**
@@ -707,18 +761,19 @@
       Thread.sleep(1500);
       nodeManager.processHeartbeat(healthyNode, layoutInfo);
       Thread.sleep(2 * 1000);
-      assertEquals(1, nodeManager.getNodeCount(HEALTHY));
+      assertEquals(1, nodeManager.getNodeCount(NodeStatus.inServiceHealthy()));
 
 
       // 3.5 seconds from last heartbeat for the stale and deadNode. So those
       //  2 nodes must move to Stale state and the healthy node must
       // remain in the healthy State.
-      List<DatanodeDetails> healthyList = nodeManager.getNodes(HEALTHY);
+      List<DatanodeDetails> healthyList = nodeManager.getNodes(
+          NodeStatus.inServiceHealthy());
       assertEquals("Expected one healthy node", 1, healthyList.size());
       assertEquals("Healthy node is not the expected ID", healthyNode
           .getUuid(), healthyList.get(0).getUuid());
 
-      assertEquals(2, nodeManager.getNodeCount(STALE));
+      assertEquals(2, nodeManager.getNodeCount(NodeStatus.inServiceStale()));
 
       /**
        * Cluster State: Allow healthyNode to remain in healthy state and
@@ -734,14 +789,16 @@
       // 3.5 seconds have elapsed for stale node, so it moves into Stale.
       // 7 seconds have elapsed for dead node, so it moves into dead.
       // 2 Seconds have elapsed for healthy node, so it stays in healthy state.
-      healthyList = nodeManager.getNodes(HEALTHY);
-      List<DatanodeDetails> staleList = nodeManager.getNodes(STALE);
-      List<DatanodeDetails> deadList = nodeManager.getNodes(DEAD);
+      healthyList = nodeManager.getNodes((NodeStatus.inServiceHealthy()));
+      List<DatanodeDetails> staleList =
+          nodeManager.getNodes(NodeStatus.inServiceStale());
+      List<DatanodeDetails> deadList =
+          nodeManager.getNodes(NodeStatus.inServiceDead());
 
       assertEquals(3, nodeManager.getAllNodes().size());
-      assertEquals(1, nodeManager.getNodeCount(HEALTHY));
-      assertEquals(1, nodeManager.getNodeCount(STALE));
-      assertEquals(1, nodeManager.getNodeCount(DEAD));
+      assertEquals(1, nodeManager.getNodeCount(NodeStatus.inServiceHealthy()));
+      assertEquals(1, nodeManager.getNodeCount(NodeStatus.inServiceStale()));
+      assertEquals(1, nodeManager.getNodeCount(NodeStatus.inServiceDead()));
 
       assertEquals("Expected one healthy node",
           1, healthyList.size());
@@ -767,7 +824,7 @@
       Thread.sleep(500);
       //Assert all nodes are healthy.
       assertEquals(3, nodeManager.getAllNodes().size());
-      assertEquals(3, nodeManager.getNodeCount(HEALTHY));
+      assertEquals(3, nodeManager.getNodeCount(NodeStatus.inServiceHealthy()));
     }
   }
 
@@ -821,7 +878,7 @@
    */
   private boolean findNodes(NodeManager nodeManager, int count,
       HddsProtos.NodeState state) {
-    return count == nodeManager.getNodeCount(state);
+    return count == nodeManager.getNodeCount(NodeStatus.inServiceStale());
   }
 
   /**
@@ -900,11 +957,14 @@
       // Assert all healthy nodes are healthy now, this has to be a greater
       // than check since Stale nodes can be healthy when we check the state.
 
-      assertTrue(nodeManager.getNodeCount(HEALTHY) >= healthyCount);
+      assertTrue(nodeManager.getNodeCount(NodeStatus.inServiceHealthy())
+          >= healthyCount);
 
-      assertEquals(deadCount, nodeManager.getNodeCount(DEAD));
+      assertEquals(deadCount,
+          nodeManager.getNodeCount(NodeStatus.inServiceDead()));
 
-      List<DatanodeDetails> deadList = nodeManager.getNodes(DEAD);
+      List<DatanodeDetails> deadList =
+          nodeManager.getNodes(NodeStatus.inServiceDead());
 
       for (DatanodeDetails node : deadList) {
         assertTrue(deadNodeList.contains(node));
@@ -1028,9 +1088,11 @@
       //TODO: wait for EventQueue to be processed
       eventQueue.processAll(8000L);
 
-      assertEquals(nodeCount, nodeManager.getNodeCount(HEALTHY_READONLY));
+      assertEquals(nodeCount, nodeManager.getNodeCount(
+          NodeStatus.inServiceHealthyReadOnly()));
       Thread.sleep(3 * 1000);
-      assertEquals(nodeCount, nodeManager.getNodeCount(HEALTHY));
+      assertEquals(nodeCount, nodeManager.getNodeCount(
+          NodeStatus.inServiceHealthy()));
       assertEquals(capacity * nodeCount, (long) nodeManager.getStats()
           .getCapacity().get());
       assertEquals(used * nodeCount, (long) nodeManager.getStats()
@@ -1086,9 +1148,11 @@
       //TODO: wait for EventQueue to be processed
       eventQueue.processAll(8000L);
 
-      assertEquals(1, nodeManager.getNodeCount(HEALTHY_READONLY));
+      assertEquals(1, nodeManager.getNodeCount(
+          NodeStatus.inServiceHealthyReadOnly()));
       Thread.sleep(3 * 1000);
-      assertEquals(1, nodeManager.getNodeCount(HEALTHY));
+      assertEquals(1, nodeManager
+          .getNodeCount(NodeStatus.inServiceHealthy()));
       assertEquals(volumeCount / 2,
               nodeManager.minHealthyVolumeNum(dnList));
       dnList.clear();
@@ -1187,7 +1251,7 @@
       // Wait up to 4s so that the node becomes stale
       // Verify the usage info should be unchanged.
       GenericTestUtils.waitFor(
-          () -> nodeManager.getNodeCount(STALE) == 1, 100,
+          () -> nodeManager.getNodeCount(NodeStatus.inServiceStale()) == 1, 100,
           4 * 1000);
       assertEquals(nodeCount, nodeManager.getNodeStats().size());
 
@@ -1205,7 +1269,7 @@
       // Wait up to 4 more seconds so the node becomes dead
       // Verify usage info should be updated.
       GenericTestUtils.waitFor(
-          () -> nodeManager.getNodeCount(DEAD) == 1, 100,
+          () -> nodeManager.getNodeCount(NodeStatus.inServiceDead()) == 1, 100,
           4 * 1000);
 
       assertEquals(0, nodeManager.getNodeStats().size());
@@ -1230,7 +1294,7 @@
       // Wait up to 5 seconds so that the dead node becomes healthy
       // Verify usage info should be updated.
       GenericTestUtils.waitFor(
-          () -> nodeManager.getNodeCount(HEALTHY) == 1,
+          () -> nodeManager.getNodeCount(NodeStatus.inServiceHealthy()) == 1,
           100, 5 * 1000);
       GenericTestUtils.waitFor(
           () -> nodeManager.getStats().getScmUsed().get() == expectedScmUsed,
@@ -1368,7 +1432,8 @@
       // verify network topology cluster has all the registered nodes
       Thread.sleep(4 * 1000);
       NetworkTopology clusterMap = scm.getClusterMap();
-      assertEquals(nodeCount, nodeManager.getNodeCount(HEALTHY));
+      assertEquals(nodeCount,
+          nodeManager.getNodeCount(NodeStatus.inServiceHealthy()));
       assertEquals(nodeCount, clusterMap.getNumOfLeafNode(""));
       assertEquals(4, clusterMap.getMaxLevel());
       List<DatanodeDetails> nodeList = nodeManager.getAllNodes();
@@ -1412,7 +1477,8 @@
       // verify network topology cluster has all the registered nodes
       Thread.sleep(4 * 1000);
       NetworkTopology clusterMap = scm.getClusterMap();
-      assertEquals(nodeCount, nodeManager.getNodeCount(HEALTHY));
+      assertEquals(nodeCount,
+          nodeManager.getNodeCount(NodeStatus.inServiceHealthy()));
       assertEquals(nodeCount, clusterMap.getNumOfLeafNode(""));
       assertEquals(3, clusterMap.getMaxLevel());
       List<DatanodeDetails> nodeList = nodeManager.getAllNodes();
@@ -1432,6 +1498,64 @@
     }
   }
 
+  @Test
+  public void testGetNodeInfo()
+      throws IOException, InterruptedException, NodeNotFoundException,
+        AuthenticationException {
+    OzoneConfiguration conf = getConf();
+    final int nodeCount = 6;
+    SCMNodeManager nodeManager = createNodeManager(conf);
+
+    for (int i=0; i<nodeCount; i++) {
+      DatanodeDetails datanodeDetails =
+          MockDatanodeDetails.randomDatanodeDetails();
+      final long capacity = 2000;
+      final long used = 100;
+      final long remaining = 1900;
+      UUID dnId = datanodeDetails.getUuid();
+      String storagePath = testDir.getAbsolutePath() + "/" + dnId;
+      StorageReportProto report = TestUtils
+          .createStorageReport(dnId, storagePath, capacity, used,
+              remaining, null);
+
+      LayoutVersionManager versionManager =
+          nodeManager.getLayoutVersionManager();
+      LayoutVersionProto layoutInfo = LayoutVersionProto.newBuilder()
+          .setMetadataLayoutVersion(versionManager.getMetadataLayoutVersion())
+          .setSoftwareLayoutVersion(versionManager.getSoftwareLayoutVersion())
+          .build();
+      nodeManager.register(datanodeDetails, TestUtils.createNodeReport(report),
+          TestUtils.getRandomPipelineReports(), layoutInfo);
+      nodeManager.processHeartbeat(datanodeDetails, layoutInfo);
+      if (i == 5) {
+        nodeManager.setNodeOperationalState(datanodeDetails,
+            HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE);
+      }
+      if (i == 3 || i == 4) {
+        nodeManager.setNodeOperationalState(datanodeDetails,
+            HddsProtos.NodeOperationalState.DECOMMISSIONED);
+      }
+    }
+    Thread.sleep(100);
+
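+    // Each node reports capacity=2000, used=100 and remaining=1900. With
+    // 3 IN_SERVICE, 2 DECOMMISSIONED and 1 ENTERING_MAINTENANCE nodes, the
+    // aggregates below are simply those figures multiplied by the number of
+    // nodes in each operational state.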
+    Map<String, Long> stats = nodeManager.getNodeInfo();
+    // 3 IN_SERVICE nodes:
+    assertEquals(6000, stats.get("DiskCapacity").longValue());
+    assertEquals(300, stats.get("DiskUsed").longValue());
+    assertEquals(5700, stats.get("DiskRemaining").longValue());
+
+    // 2 Decommissioned nodes
+    assertEquals(4000, stats.get("DecommissionedDiskCapacity").longValue());
+    assertEquals(200, stats.get("DecommissionedDiskUsed").longValue());
+    assertEquals(3800, stats.get("DecommissionedDiskRemaining").longValue());
+
+    // 1 Maintenance node
+    assertEquals(2000, stats.get("MaintenanceDiskCapacity").longValue());
+    assertEquals(100, stats.get("MaintenanceDiskUsed").longValue());
+    assertEquals(1900, stats.get("MaintenanceDiskRemaining").longValue());
+  }
+
   /**
    * Test add node into a 4-layer network topology during node register.
    */
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/states/TestNodeStateMap.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/states/TestNodeStateMap.java
new file mode 100644
index 0000000..d1c2286
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/states/TestNodeStateMap.java
@@ -0,0 +1,189 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node.states;
+
+import java.util.List;
+import java.util.UUID;
+import java.util.concurrent.CountDownLatch;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.MockDatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
+
+import static junit.framework.TestCase.assertEquals;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Class to test the NodeStateMap class, which is an internal class used by
+ * NodeStateManager.
+ */
+
+public class TestNodeStateMap {
+
+  private NodeStateMap map;
+
+  @Before
+  public void setUp() {
+    map = new NodeStateMap();
+  }
+
+  @After
+  public void tearDown() {
+  }
+
+  @Test
+  public void testNodeCanBeAddedAndRetrieved()
+      throws NodeAlreadyExistsException, NodeNotFoundException {
+    DatanodeDetails dn = generateDatanode();
+    NodeStatus status = NodeStatus.inServiceHealthy();
+    map.addNode(dn, status, null);
+    assertEquals(dn, map.getNodeInfo(dn.getUuid()));
+    assertEquals(status, map.getNodeStatus(dn.getUuid()));
+  }
+
+  @Test
+  public void testNodeHealthStateCanBeUpdated()
+      throws NodeAlreadyExistsException, NodeNotFoundException {
+    DatanodeDetails dn = generateDatanode();
+    NodeStatus status = NodeStatus.inServiceHealthy();
+    map.addNode(dn, status, null);
+
+    NodeStatus expectedStatus = NodeStatus.inServiceStale();
+    NodeStatus returnedStatus =
+        map.updateNodeHealthState(dn.getUuid(), expectedStatus.getHealth());
+    assertEquals(expectedStatus, returnedStatus);
+    assertEquals(returnedStatus, map.getNodeStatus(dn.getUuid()));
+  }
+
+  @Test
+  public void testNodeOperationalStateCanBeUpdated()
+      throws NodeAlreadyExistsException, NodeNotFoundException {
+    DatanodeDetails dn = generateDatanode();
+    NodeStatus status = NodeStatus.inServiceHealthy();
+    map.addNode(dn, status, null);
+
+    NodeStatus expectedStatus = new NodeStatus(
+        NodeOperationalState.DECOMMISSIONING,
+        NodeState.HEALTHY, 999);
+    NodeStatus returnedStatus = map.updateNodeOperationalState(
+        dn.getUuid(), expectedStatus.getOperationalState(), 999);
+    assertEquals(expectedStatus, returnedStatus);
+    assertEquals(returnedStatus, map.getNodeStatus(dn.getUuid()));
+    assertEquals(999, returnedStatus.getOpStateExpiryEpochSeconds());
+  }
+
+  @Test
+  public void testGetNodeMethodsReturnCorrectCountsAndStates()
+      throws NodeAlreadyExistsException {
+    // Add one node for all possible states
+    int nodeCount = 0;
+    for (NodeOperationalState op : NodeOperationalState.values()) {
+      for (NodeState health : NodeState.values()) {
+        addRandomNodeWithState(op, health);
+        nodeCount++;
+      }
+    }
+    NodeStatus requestedState = NodeStatus.inServiceStale();
+    List<UUID> nodes = map.getNodes(requestedState);
+    assertEquals(1, nodes.size());
+    assertEquals(1, map.getNodeCount(requestedState));
+    assertEquals(nodeCount, map.getTotalNodeCount());
+    assertEquals(nodeCount, map.getAllNodes().size());
+    assertEquals(nodeCount, map.getAllDatanodeInfos().size());
+
+    // Checks for the getNodeCount(opstate, health) method
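+    // One node was added per (opState, health) combination, so a null filter
+    // matches every node, a single health value matches one node per
+    // operational state, and a single opState matches one node per health
+    // state; hence the counts of 5 and 4 asserted below.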
+    assertEquals(nodeCount, map.getNodeCount(null, null));
+    assertEquals(1,
+        map.getNodeCount(NodeOperationalState.DECOMMISSIONING,
+            NodeState.STALE));
+    assertEquals(5, map.getNodeCount(null, NodeState.HEALTHY));
+    assertEquals(4,
+        map.getNodeCount(NodeOperationalState.DECOMMISSIONING, null));
+  }
+
+  /**
+   * Test that the container list remains iterable even if it is modified
+   * from another thread.
+   */
+  @Test
+  public void testConcurrency() throws Exception {
+    NodeStateMap nodeStateMap = new NodeStateMap();
+
+    final DatanodeDetails datanodeDetails =
+        MockDatanodeDetails.randomDatanodeDetails();
+
+    nodeStateMap.addNode(datanodeDetails, NodeStatus.inServiceHealthy(), null);
+
+    UUID dnUuid = datanodeDetails.getUuid();
+
+    nodeStateMap.addContainer(dnUuid, new ContainerID(1L));
+    nodeStateMap.addContainer(dnUuid, new ContainerID(2L));
+    nodeStateMap.addContainer(dnUuid, new ContainerID(3L));
+
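+    // The latches interleave the two threads: iteration reads the first
+    // container, pauses while the background thread removes an entry, then
+    // continues. The test passes as long as the loop completes without a
+    // concurrent-modification failure.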
+    CountDownLatch elementRemoved = new CountDownLatch(1);
+    CountDownLatch loopStarted = new CountDownLatch(1);
+
+    new Thread(() -> {
+      try {
+        loopStarted.await();
+        nodeStateMap.removeContainer(dnUuid, new ContainerID(1L));
+        elementRemoved.countDown();
+      } catch (Exception e) {
+        e.printStackTrace();
+      }
+
+    }).start();
+
+    boolean first = true;
+    for (ContainerID key : nodeStateMap.getContainers(dnUuid)) {
+      if (first) {
+        loopStarted.countDown();
+        elementRemoved.await();
+      }
+      first = false;
+      System.out.println(key);
+    }
+  }
+
+  private void addNodeWithState(
+      DatanodeDetails dn,
+      NodeOperationalState opState, NodeState health
+  )
+      throws NodeAlreadyExistsException {
+    NodeStatus status = new NodeStatus(opState, health);
+    map.addNode(dn, status, null);
+  }
+
+  private void addRandomNodeWithState(
+      NodeOperationalState opState, NodeState health
+  )
+      throws NodeAlreadyExistsException {
+    DatanodeDetails dn = generateDatanode();
+    addNodeWithState(dn, opState, health);
+  }
+
+  private DatanodeDetails generateDatanode() {
+    return DatanodeDetails.newBuilder().setUuid(UUID.randomUUID()).build();
+  }
+
+}
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelineDatanodesIntersection.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelineDatanodesIntersection.java
index 41eea3d..3f2ed2c 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelineDatanodesIntersection.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelineDatanodesIntersection.java
@@ -23,6 +23,7 @@
 import org.apache.hadoop.hdds.scm.container.MockNodeManager;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
@@ -85,7 +86,7 @@
         stateManager, conf);
 
     int healthyNodeCount = nodeManager
-        .getNodeCount(HddsProtos.NodeState.HEALTHY);
+        .getNodeCount(NodeStatus.inServiceHealthy());
     int intersectionCount = 0;
     int createdPipelineCount = 0;
     while (!end && createdPipelineCount <= healthyNodeCount * nodeHeaviness) {
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelinePlacementPolicy.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelinePlacementPolicy.java
index f024fc5..8a2b9c9 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelinePlacementPolicy.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestPipelinePlacementPolicy.java
@@ -33,6 +33,7 @@
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.container.MockNodeManager;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.scm.net.NetConstants;
 import org.apache.hadoop.hdds.scm.net.NetworkTopology;
 import org.apache.hadoop.hdds.scm.net.NetworkTopologyImpl;
@@ -175,7 +176,7 @@
   @Test
   public void testPickLowestLoadAnchor() throws IOException{
     List<DatanodeDetails> healthyNodes = nodeManager
-        .getNodes(HddsProtos.NodeState.HEALTHY);
+        .getNodes(NodeStatus.inServiceHealthy());
 
     int maxPipelineCount = PIPELINE_LOAD_LIMIT * healthyNodes.size()
         / HddsProtos.ReplicationFactor.THREE.getNumber();
@@ -215,7 +216,7 @@
   @Test
   public void testChooseNodeBasedOnRackAwareness() {
     List<DatanodeDetails> healthyNodes = overWriteLocationInNodes(
-        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY));
+        nodeManager.getNodes(NodeStatus.inServiceHealthy()));
     DatanodeDetails anchor = placementPolicy.chooseNode(healthyNodes);
     NetworkTopology topologyWithDifRacks =
         createNetworkTopologyOnDifRacks();
@@ -231,7 +232,7 @@
   @Test
   public void testFallBackPickNodes() {
     List<DatanodeDetails> healthyNodes = overWriteLocationInNodes(
-        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY));
+        nodeManager.getNodes(NodeStatus.inServiceHealthy()));
     DatanodeDetails node;
     try {
       node = placementPolicy.fallBackPickNodes(healthyNodes, null);
@@ -338,7 +339,7 @@
   @Test
   public void testHeavyNodeShouldBeExcluded() throws SCMException{
     List<DatanodeDetails> healthyNodes =
-        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+        nodeManager.getNodes(NodeStatus.inServiceHealthy());
     int nodesRequired = HddsProtos.ReplicationFactor.THREE.getNumber();
     // only minority of healthy NODES are heavily engaged in pipelines.
     int minorityHeavy = healthyNodes.size()/2 - 1;
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineProvider.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineProvider.java
index 9173302..383944d 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineProvider.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineProvider.java
@@ -25,6 +25,7 @@
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.container.MockNodeManager;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.junit.Assert;
 import org.junit.Assume;
 import org.junit.Test;
@@ -146,7 +147,7 @@
       throws Exception {
     init(2);
     List<DatanodeDetails> healthyNodes = nodeManager
-        .getNodes(HddsProtos.NodeState.HEALTHY).stream()
+        .getNodes(NodeStatus.inServiceHealthy()).stream()
         .limit(3).collect(Collectors.toList());
 
     Pipeline pipeline1 = provider.create(
@@ -163,7 +164,7 @@
     int maxPipelinePerNode = 2;
     init(maxPipelinePerNode);
     List<DatanodeDetails> healthyNodes =
-        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+        nodeManager.getNodes(NodeStatus.inServiceHealthy());
 
     Assume.assumeTrue(healthyNodes.size() == 8);
 
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMPipelineManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMPipelineManager.java
index 67aa338..49545b5 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMPipelineManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMPipelineManager.java
@@ -45,6 +45,7 @@
 import org.apache.hadoop.hdds.scm.metadata.PipelineIDCodec;
 import org.apache.hadoop.hdds.scm.metadata.SCMMetadataStore;
 import org.apache.hadoop.hdds.scm.metadata.SCMMetadataStoreImpl;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.scm.safemode.SCMSafeModeManager;
 import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.PipelineReportFromDatanode;
 import org.apache.hadoop.hdds.server.events.EventQueue;
@@ -371,6 +372,10 @@
         metrics);
     Assert.assertEquals(0, numPipelineAllocated);
 
+    // a one-node pipeline creation is not accounted for in the
+    // pipeline limit determination
+    pipelineManager.createPipeline(HddsProtos.ReplicationType.RATIS,
+        HddsProtos.ReplicationFactor.ONE);
     // max limit on no of pipelines is 4
     for (int i = 0; i < pipelinePerDn; i++) {
       Pipeline pipeline = pipelineManager
@@ -382,7 +387,7 @@
     metrics = getMetrics(
         SCMPipelineMetrics.class.getSimpleName());
     numPipelineAllocated = getLongCounter("NumPipelineAllocated", metrics);
-    Assert.assertEquals(4, numPipelineAllocated);
+    Assert.assertEquals(5, numPipelineAllocated);
 
     long numPipelineCreateFailed = getLongCounter(
         "NumPipelineCreationFailed", metrics);
@@ -401,7 +406,7 @@
     metrics = getMetrics(
         SCMPipelineMetrics.class.getSimpleName());
     numPipelineAllocated = getLongCounter("NumPipelineAllocated", metrics);
-    Assert.assertEquals(4, numPipelineAllocated);
+    Assert.assertEquals(5, numPipelineAllocated);
 
     numPipelineCreateFailed = getLongCounter(
         "NumPipelineCreationFailed", metrics);
@@ -788,7 +793,7 @@
         .setState(Pipeline.PipelineState.OPEN)
         .setNodes(
             Arrays.asList(
-                nodeManager.getNodes(HddsProtos.NodeState.HEALTHY).get(0)
+                nodeManager.getNodes(NodeStatus.inServiceHealthy()).get(0)
             )
         )
         .setNodesInOrder(Arrays.asList(0))
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/leader/choose/algorithms/TestLeaderChoosePolicy.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/leader/choose/algorithms/TestLeaderChoosePolicy.java
index 53905e7..8c4df07 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/leader/choose/algorithms/TestLeaderChoosePolicy.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/leader/choose/algorithms/TestLeaderChoosePolicy.java
@@ -54,7 +54,7 @@
         mock(EventPublisher.class));
     Assert.assertSame(
         ratisPipelineProvider.getLeaderChoosePolicy().getClass(),
-        DefaultLeaderChoosePolicy.class);
+        MinLeaderCountChoosePolicy.class);
   }
 
   @Test(expected = RuntimeException.class)
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMContainerMetrics.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMContainerMetrics.java
index 0a2eeef..2a2bcba 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMContainerMetrics.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMContainerMetrics.java
@@ -77,5 +77,7 @@
         "Number of containers in deleting state"), 6);
     verify(mb, times(1)).addGauge(Interns.info("DeletedContainers",
         "Number of containers in deleted state"), 7);
+    verify(mb, times(1)).addGauge(Interns.info("TotalContainers",
+        "Number of all containers"), 27);
   }
 }
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
index a322d41..f577c78 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
@@ -56,6 +56,7 @@
 import org.apache.hadoop.ozone.container.common.volume.HddsVolume;
 import org.apache.hadoop.ozone.container.ozoneimpl.ContainerController;
 import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
+import org.apache.hadoop.ozone.container.replication.ReplicationServer.ReplicationConfig;
 import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.PathUtils;
@@ -110,6 +111,7 @@
     config
         .setBoolean(OzoneConfigKeys.DFS_CONTAINER_RATIS_IPC_RANDOM_PORT, true);
     config.set(HddsConfigKeys.HDDS_COMMAND_STATUS_REPORT_INTERVAL, "1s");
+    config.setFromObject(new ReplicationConfig().setPort(0));
   }
 
   @Test
@@ -119,8 +121,8 @@
    */
   public void testGetVersion() throws Exception {
     try (EndpointStateMachine rpcEndPoint =
-             createEndpoint(SCMTestUtils.getConf(),
-                 serverAddress, 1000)) {
+        createEndpoint(SCMTestUtils.getConf(),
+            serverAddress, 1000)) {
       SCMVersionResponseProto responseProto = rpcEndPoint.getEndPoint()
           .getVersion(null);
       Assert.assertNotNull(responseProto);
@@ -138,6 +140,7 @@
    */
   public void testGetVersionTask() throws Exception {
     OzoneConfiguration conf = SCMTestUtils.getConf();
+    conf.setFromObject(new ReplicationConfig().setPort(0));
     try (EndpointStateMachine rpcEndPoint = createEndpoint(conf,
         serverAddress, 1000)) {
       DatanodeDetails datanodeDetails = randomDatanodeDetails();
@@ -165,6 +168,7 @@
         true);
     conf.setBoolean(OzoneConfigKeys.DFS_CONTAINER_RATIS_IPC_RANDOM_PORT,
         true);
+    conf.setFromObject(new ReplicationConfig().setPort(0));
     try (EndpointStateMachine rpcEndPoint = createEndpoint(conf,
         serverAddress, 1000)) {
       GenericTestUtils.LogCapturer logCapturer = GenericTestUtils.LogCapturer
@@ -192,7 +196,7 @@
       rpcEndPoint.setState(EndpointStateMachine.EndPointStates.GETVERSION);
       newState = versionTask.call();
       Assert.assertEquals(EndpointStateMachine.EndPointStates.SHUTDOWN,
-            newState);
+          newState);
       List<HddsVolume> volumesList = ozoneContainer.getVolumeSet()
           .getFailedVolumesList();
       Assert.assertTrue(volumesList.size() == 1);
@@ -209,8 +213,6 @@
     }
   }
 
-
-
   @Test
   /**
    * This test makes a call to end point where there is no SCM server. We
@@ -298,8 +300,10 @@
     return TestUtils.createStorageReport(id, storagePath, 100, 10, 90, null);
   }
 
-  private EndpointStateMachine registerTaskHelper(InetSocketAddress scmAddress,
-      int rpcTimeout, boolean clearDatanodeDetails) throws Exception {
+  private EndpointStateMachine registerTaskHelper(
+      InetSocketAddress scmAddress,
+      int rpcTimeout, boolean clearDatanodeDetails
+  ) throws Exception {
     OzoneConfiguration conf = SCMTestUtils.getConf();
     EndpointStateMachine rpcEndPoint =
         createEndpoint(conf,
@@ -334,7 +338,7 @@
   @Test
   public void testRegisterTask() throws Exception {
     try (EndpointStateMachine rpcEndpoint =
-             registerTaskHelper(serverAddress, 1000, false)) {
+        registerTaskHelper(serverAddress, 1000, false)) {
       // Successful register should move us to Heartbeat state.
       Assert.assertEquals(EndpointStateMachine.EndPointStates.HEARTBEAT,
           rpcEndpoint.getState());
@@ -345,7 +349,7 @@
   public void testRegisterToInvalidEndpoint() throws Exception {
     InetSocketAddress address = SCMTestUtils.getReuseableAddress();
     try (EndpointStateMachine rpcEndpoint =
-             registerTaskHelper(address, 1000, false)) {
+        registerTaskHelper(address, 1000, false)) {
       Assert.assertEquals(EndpointStateMachine.EndPointStates.REGISTER,
           rpcEndpoint.getState());
     }
@@ -355,7 +359,7 @@
   public void testRegisterNoContainerID() throws Exception {
     InetSocketAddress address = SCMTestUtils.getReuseableAddress();
     try (EndpointStateMachine rpcEndpoint =
-             registerTaskHelper(address, 1000, true)) {
+        registerTaskHelper(address, 1000, true)) {
       // No Container ID, therefore we tell the datanode that we would like to
       // shutdown.
       Assert.assertEquals(EndpointStateMachine.EndPointStates.SHUTDOWN,
@@ -379,8 +383,8 @@
   public void testHeartbeat() throws Exception {
     DatanodeDetails dataNode = randomDatanodeDetails();
     try (EndpointStateMachine rpcEndPoint =
-             createEndpoint(SCMTestUtils.getConf(),
-                 serverAddress, 1000)) {
+        createEndpoint(SCMTestUtils.getConf(),
+            serverAddress, 1000)) {
       SCMHeartbeatRequestProto request = SCMHeartbeatRequestProto.newBuilder()
           .setDatanodeDetails(dataNode.getProtoBufMessage())
           .setNodeReport(TestUtils.createNodeReport(
@@ -403,7 +407,6 @@
       // Add some scmCommands for heartbeat response
       addScmCommands();
 
-
       SCMHeartbeatRequestProto request = SCMHeartbeatRequestProto.newBuilder()
           .setDatanodeDetails(dataNode.getProtoBufMessage())
           .setNodeReport(TestUtils.createNodeReport(
@@ -434,17 +437,17 @@
     SCMCommandProto closeCommand = SCMCommandProto.newBuilder()
         .setCloseContainerCommandProto(
             CloseContainerCommandProto.newBuilder().setCmdId(1)
-        .setContainerID(1)
-        .setPipelineID(PipelineID.randomId().getProtobuf())
-        .build())
+                .setContainerID(1)
+                .setPipelineID(PipelineID.randomId().getProtobuf())
+                .build())
         .setCommandType(Type.closeContainerCommand)
         .build();
     SCMCommandProto replicationCommand = SCMCommandProto.newBuilder()
         .setReplicateContainerCommandProto(
             ReplicateContainerCommandProto.newBuilder()
-        .setCmdId(2)
-        .setContainerID(2)
-        .build())
+                .setCmdId(2)
+                .setContainerID(2)
+                .build())
         .setCommandType(Type.replicateContainerCommand)
         .build();
     SCMCommandProto deleteBlockCommand = SCMCommandProto.newBuilder()
@@ -465,8 +468,10 @@
     scmServerImpl.addScmCommandRequest(replicationCommand);
   }
 
-  private StateContext heartbeatTaskHelper(InetSocketAddress scmAddress,
-      int rpcTimeout) throws Exception {
+  private StateContext heartbeatTaskHelper(
+      InetSocketAddress scmAddress,
+      int rpcTimeout
+  ) throws Exception {
     OzoneConfiguration conf = SCMTestUtils.getConf();
     conf.set(DFS_DATANODE_DATA_DIR_KEY, testDir.getAbsolutePath());
     conf.set(OZONE_METADATA_DIRS, testDir.getAbsolutePath());
@@ -475,11 +480,10 @@
     // hard coding once we fix the Ratis default behaviour.
     conf.setBoolean(OzoneConfigKeys.DFS_CONTAINER_RATIS_IPC_RANDOM_PORT, true);
 
-
     // Create a datanode state machine for stateConext used by endpoint task
     try (DatanodeStateMachine stateMachine = new DatanodeStateMachine(
         randomDatanodeDetails(), conf, null, null);
-         EndpointStateMachine rpcEndPoint =
+        EndpointStateMachine rpcEndPoint =
             createEndpoint(conf, scmAddress, rpcTimeout)) {
       HddsProtos.DatanodeDetailsProto datanodeDetailsProto =
           randomDatanodeDetails().getProtoBufMessage();
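
The recurring config.setFromObject(new ReplicationConfig().setPort(0)) calls above point the datanode replication server (note the new ReplicationServer.ReplicationConfig import) at port 0, i.e. an OS-assigned ephemeral port. A minimal sketch of the same idiom, assuming only the classes already imported by this test:

    OzoneConfiguration conf = SCMTestUtils.getConf();
    // Port 0 asks the OS for a free port, so parallel test runs do not collide.
    conf.setFromObject(new ReplicationConfig().setPort(0));
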
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/placement/TestContainerPlacement.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/placement/TestContainerPlacement.java
index a0cf957..1656376 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/placement/TestContainerPlacement.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/placement/TestContainerPlacement.java
@@ -27,10 +27,9 @@
 import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRandom;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.ozone.OzoneConsts;
-
 import org.apache.commons.math3.stat.descriptive.DescriptiveStatistics;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
 import org.junit.Assert;
 import static org.junit.Assert.assertEquals;
 import org.junit.Test;
@@ -42,7 +41,8 @@
 
   private DescriptiveStatistics computeStatistics(NodeManager nodeManager) {
     DescriptiveStatistics descriptiveStatistics = new DescriptiveStatistics();
-    for (DatanodeDetails dd : nodeManager.getNodes(HEALTHY)) {
+    for (DatanodeDetails dd :
+        nodeManager.getNodes(NodeStatus.inServiceHealthy())) {
       float weightedValue =
           nodeManager.getNodeStat(dd).get().getScmUsed().get() / (float)
               nodeManager.getNodeStat(dd).get().getCapacity().get();
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
index b917309..3a211de 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
@@ -18,11 +18,13 @@
 
 import com.google.common.base.Preconditions;
 
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos.LayoutVersionProto;
 import org.apache.hadoop.hdds.protocol.proto
         .StorageContainerDatanodeProtocolProtos.PipelineReportsProto;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.scm.net.NetworkTopology;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
@@ -54,17 +56,17 @@
  * A Node Manager to test replication.
  */
 public class ReplicationNodeManagerMock implements NodeManager {
-  private final Map<DatanodeDetails, NodeState> nodeStateMap;
+  private final Map<DatanodeDetails, NodeStatus> nodeStateMap;
   private final CommandQueue commandQueue;
 
   /**
    * A list of Datanodes and current states.
-   * @param nodeState A node state map.
+   * @param nodeStatus A node status map.
    */
-  public ReplicationNodeManagerMock(Map<DatanodeDetails, NodeState> nodeState,
+  public ReplicationNodeManagerMock(Map<DatanodeDetails, NodeStatus> nodeStatus,
                                     CommandQueue commandQueue) {
-    Preconditions.checkNotNull(nodeState);
-    this.nodeStateMap = nodeState;
+    Preconditions.checkNotNull(nodeStatus);
+    this.nodeStateMap = nodeStatus;
     this.commandQueue = commandQueue;
   }
 
@@ -74,7 +76,7 @@
    * @return A state to number of nodes that in this state mapping
    */
   @Override
-  public Map<String, Integer> getNodeCount() {
+  public Map<String, Map<String, Integer>> getNodeCount() {
     return null;
   }
 
@@ -86,22 +88,48 @@
   /**
    * Gets all Live Datanodes that is currently communicating with SCM.
    *
-   * @param nodestate - State of the node
+   * @param nodestatus - State of the node
    * @return List of Datanodes that are Heartbeating SCM.
    */
   @Override
-  public List<DatanodeDetails> getNodes(NodeState nodestate) {
+  public List<DatanodeDetails> getNodes(NodeStatus nodestatus) {
+    return null;
+  }
+
+  /**
+   * Gets all Live Datanodes that are currently communicating with SCM.
+   *
+   * @param opState - Operational state of the node
+   * @param health - Health of the node
+   * @return List of Datanodes that are Heartbeating SCM.
+   */
+  @Override
+  public List<DatanodeDetails> getNodes(
+      HddsProtos.NodeOperationalState opState, NodeState health) {
     return null;
   }
 
   /**
    * Returns the Number of Datanodes that are communicating with SCM.
    *
-   * @param nodestate - State of the node
+   * @param nodestatus - State of the node
    * @return int -- count
    */
   @Override
-  public int getNodeCount(NodeState nodestate) {
+  public int getNodeCount(NodeStatus nodestatus) {
+    return 0;
+  }
+
+  /**
+   * Returns the Number of Datanodes that are communicating with SCM.
+   *
+   * @param opState - Operational state of the node
+   * @param health - Health of the node
+   * @return int -- count
+   */
+  @Override
+  public int getNodeCount(
+      HddsProtos.NodeOperationalState opState, NodeState health) {
     return 0;
   }
 
@@ -155,11 +183,40 @@
    * @return Healthy/Stale/Dead.
    */
   @Override
-  public NodeState getNodeState(DatanodeDetails dd) {
+  public NodeStatus getNodeStatus(DatanodeDetails dd) {
     return nodeStateMap.get(dd);
   }
 
   /**
+   * Set the operational state of a node.
+   * @param dd The datanode to set the new state for
+   * @param newState The new operational state for the node
+   */
+  @Override
+  public void setNodeOperationalState(DatanodeDetails dd,
+      HddsProtos.NodeOperationalState newState) throws NodeNotFoundException {
+    setNodeOperationalState(dd, newState, 0);
+  }
+
+  /**
+   * Set the operational state of a node.
+   * @param dd The datanode to set the new state for
+   * @param newState The new operational state for the node
+   * @param opStateExpiryEpocSec Epoch seconds at which the operational state
+   *                             expires
+   */
+  @Override
+  public void setNodeOperationalState(DatanodeDetails dd,
+      HddsProtos.NodeOperationalState newState, long opStateExpiryEpocSec)
+      throws NodeNotFoundException {
+    NodeStatus currentStatus = nodeStateMap.get(dd);
+    if (currentStatus != null) {
+      nodeStateMap.put(dd, new NodeStatus(newState, currentStatus.getHealth(),
+          opStateExpiryEpocSec));
+    } else {
+      throw new NodeNotFoundException();
+    }
+  }
+
+  /**
    * Get set of pipelines a datanode is part of.
    * @param dnId - datanodeID
    * @return Set of PipelineID
@@ -302,10 +359,10 @@
    * Adds a node to the existing Node manager. This is used only for test
    * purposes.
    * @param id DatanodeDetails
-   * @param state State you want to put that node to.
+   * @param status Status you want to put the node into.
    */
-  public void addNode(DatanodeDetails id, NodeState state) {
-    nodeStateMap.put(id, state);
+  public void addNode(DatanodeDetails id, NodeStatus status) {
+    nodeStateMap.put(id, status);
   }
 
   @Override
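
ReplicationNodeManagerMock now keys each datanode by a full NodeStatus (operational state plus health) instead of a bare NodeState. A rough fragment showing how a test could drive the new API, using only calls visible in this patch; the commandQueue and datanode variables are assumed to exist in the surrounding test:

    Map<DatanodeDetails, NodeStatus> nodes = new HashMap<>();
    ReplicationNodeManagerMock nodeManager =
        new ReplicationNodeManagerMock(nodes, commandQueue);
    nodeManager.addNode(datanode, NodeStatus.inServiceHealthy());
    // Changes only the operational dimension; health stays as it was.
    // Throws NodeNotFoundException if the node was never added.
    nodeManager.setNodeOperationalState(
        datanode, HddsProtos.NodeOperationalState.DECOMMISSIONING);
    NodeStatus status = nodeManager.getNodeStatus(datanode);
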
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/scm/node/TestSCMNodeMetrics.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/scm/node/TestSCMNodeMetrics.java
index 83b4bb0..e372fe6 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/scm/node/TestSCMNodeMetrics.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/scm/node/TestSCMNodeMetrics.java
@@ -197,17 +197,62 @@
 
     MetricsRecordBuilder metricsSource = getMetrics(SCMNodeMetrics.SOURCE_NAME);
 
-    assertGauge("HealthyReadOnlyNodes", 1, metricsSource);
-    assertGauge("StaleNodes", 0, metricsSource);
-    assertGauge("DeadNodes", 0, metricsSource);
-    assertGauge("DecommissioningNodes", 0, metricsSource);
-    assertGauge("DecommissionedNodes", 0, metricsSource);
-    assertGauge("DiskCapacity", 100L, metricsSource);
-    assertGauge("DiskUsed", 10L, metricsSource);
-    assertGauge("DiskRemaining", 90L, metricsSource);
-    assertGauge("SSDCapacity", 0L, metricsSource);
-    assertGauge("SSDUsed", 0L, metricsSource);
-    assertGauge("SSDRemaining", 0L, metricsSource);
+    assertGauge("InServiceHealthyNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("InServiceHealthyReadonlyNodes", 1,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("InServiceStaleNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("InServiceDeadNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("DecommissioningHealthyNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("DecommissioningStaleNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("DecommissioningDeadNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("DecommissionedHealthyNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("DecommissionedStaleNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("DecommissionedDeadNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("EnteringMaintenanceHealthyNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("EnteringMaintenanceStaleNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("EnteringMaintenanceDeadNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("InMaintenanceHealthyNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("InMaintenanceStaleNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("InMaintenanceDeadNodes", 0,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("MaintenanceDiskCapacity", 0L,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("MaintenanceDiskUsed", 0L,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("MaintenanceDiskRemaining", 0L,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("MaintenanceSSDCapacity", 0L,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("MaintenanceSSDUsed", 0L,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("MaintenanceSSDRemaining", 0L,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("DecommissionedDiskCapacity", 0L,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("DecommissionedDiskUsed", 0L,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("DecommissionedDiskRemaining", 0L,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("DecommissionedSSDCapacity", 0L,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("DecommissionedSSDUsed", 0L,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
+    assertGauge("DecommissionedSSDRemaining", 0L,
+        getMetrics(SCMNodeMetrics.class.getSimpleName()));
 
     LayoutVersionManager versionManager = nodeManager.getLayoutVersionManager();
     LayoutVersionProto layoutInfo = LayoutVersionProto.newBuilder()
@@ -217,8 +262,8 @@
     nodeManager.processHeartbeat(registeredDatanode, layoutInfo);
     sleep(4000);
     metricsSource = getMetrics(SCMNodeMetrics.SOURCE_NAME);
-    assertGauge("HealthyReadOnlyNodes", 0, metricsSource);
-    assertGauge("HealthyNodes", 1, metricsSource);
+    assertGauge("InServiceHealthyReadonlyNodes", 0, metricsSource);
+    assertGauge("InServiceHealthyNodes", 1, metricsSource);
 
   }
 
diff --git a/hadoop-hdds/tools/pom.xml b/hadoop-hdds/tools/pom.xml
index dfda5a6..9da6f93 100644
--- a/hadoop-hdds/tools/pom.xml
+++ b/hadoop-hdds/tools/pom.xml
@@ -78,6 +78,12 @@
       <groupId>org.xerial</groupId>
       <artifactId>sqlite-jdbc</artifactId>
     </dependency>
+    <dependency>
+      <groupId>org.mockito</groupId>
+      <artifactId>mockito-all</artifactId>
+      <version>${mockito1-hadoop.version}</version>
+      <scope>test</scope>
+    </dependency>
 
   </dependencies>
 </project>
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ContainerOperationClient.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ContainerOperationClient.java
index 4560548..739ea10 100644
--- a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ContainerOperationClient.java
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ContainerOperationClient.java
@@ -260,18 +260,39 @@
   /**
    * Returns a set of Nodes that meet a query criteria.
    *
-   * @param nodeStatuses - Criteria that we want the node to have.
-   * @param queryScope   - Query scope - Cluster or pool.
-   * @param poolName     - if it is pool, a pool name is required.
+   * @param opState - The operational state we want the node to have,
+   *                e.g. IN_SERVICE, DECOMMISSIONED, etc.
+   * @param nodeState - The health we want the node to have, e.g. HEALTHY,
+   *                  STALE, etc.
+   * @param queryScope - Query scope - Cluster or pool.
+   * @param poolName - if it is pool, a pool name is required.
    * @return A set of nodes that meet the requested criteria.
    * @throws IOException
    */
   @Override
-  public List<HddsProtos.Node> queryNode(HddsProtos.NodeState
-      nodeStatuses, HddsProtos.QueryScope queryScope, String poolName)
+  public List<HddsProtos.Node> queryNode(
+      HddsProtos.NodeOperationalState opState,
+      HddsProtos.NodeState nodeState,
+      HddsProtos.QueryScope queryScope, String poolName)
       throws IOException {
-    return storageContainerLocationClient.queryNode(nodeStatuses, queryScope,
-        poolName);
+    return storageContainerLocationClient.queryNode(opState, nodeState,
+        queryScope, poolName);
+  }
+
+  @Override
+  public void decommissionNodes(List<String> hosts) throws IOException {
+    storageContainerLocationClient.decommissionNodes(hosts);
+  }
+
+  @Override
+  public void recommissionNodes(List<String> hosts) throws IOException {
+    storageContainerLocationClient.recommissionNodes(hosts);
+  }
+
+  @Override
+  public void startMaintenanceNodes(List<String> hosts, int endHours)
+      throws IOException {
+    storageContainerLocationClient.startMaintenanceNodes(hosts, endHours);
   }
 
   /**
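
The ContainerOperationClient query API now takes both dimensions of node state, and the three new admin operations delegate straight to the SCM location protocol. A rough usage sketch against an ScmClient instance; the subcommands in this patch pass null for a dimension they do not want to filter on, and the host names below are illustrative:

    List<HddsProtos.Node> nodes = scmClient.queryNode(
        HddsProtos.NodeOperationalState.IN_SERVICE,
        HddsProtos.NodeState.HEALTHY,
        HddsProtos.QueryScope.CLUSTER, "");
    scmClient.decommissionNodes(Arrays.asList("dn1.example.com"));
    scmClient.startMaintenanceNodes(Arrays.asList("dn2.example.com"), 24);
    scmClient.recommissionNodes(Arrays.asList("dn1.example.com"));
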
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SafeModeWaitSubcommand.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SafeModeWaitSubcommand.java
index e3fb5c1..49a40a2 100644
--- a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SafeModeWaitSubcommand.java
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/SafeModeWaitSubcommand.java
@@ -59,21 +59,25 @@
 
     while (getRemainingTimeInSec() > 0) {
       try (ScmClient scmClient = scmOption.createScmClient()) {
-        while (getRemainingTimeInSec() > 0) {
-
-          boolean isSafeModeActive = scmClient.inSafeMode();
-
-          if (!isSafeModeActive) {
+        long remainingTime;
+        do {
+          if (!scmClient.inSafeMode()) {
             LOG.info("SCM is out of safe mode.");
             return null;
-          } else {
+          }
+
+          remainingTime = getRemainingTimeInSec();
+
+          if (remainingTime > 0) {
             LOG.info(
                 "SCM is in safe mode. Will retry in 1 sec. Remaining time "
                     + "(sec): {}",
-                getRemainingTimeInSec());
+                remainingTime);
             Thread.sleep(1000);
+          } else {
+            LOG.info("SCM is in safe mode. No more retries.");
           }
-        }
+        } while (remainingTime > 0);
       } catch (Exception ex) {
         LOG.info(
             "SCM is not available (yet?). Error is {}. Will retry in 1 sec. "
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ScmOption.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ScmOption.java
index 5b8b814..076a28a 100644
--- a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ScmOption.java
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ScmOption.java
@@ -22,6 +22,7 @@
 import org.apache.hadoop.hdds.cli.GenericParentCommand;
 import org.apache.hadoop.hdds.conf.MutableConfigurationSource;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.client.ScmClient;
 import picocli.CommandLine;
@@ -29,6 +30,7 @@
 import java.io.IOException;
 
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_CLIENT_ADDRESS_KEY;
+import static org.apache.hadoop.hdds.utils.HddsServerUtil.getScmSecurityClient;
 import static picocli.CommandLine.Spec.Target.MIXEE;
 
 /**
@@ -69,4 +71,15 @@
     }
   }
 
+  public SCMSecurityProtocol createScmSecurityClient() {
+    try {
+      GenericParentCommand parent = (GenericParentCommand)
+          spec.root().userObject();
+      return getScmSecurityClient(parent.createOzoneConfiguration());
+    } catch (IOException ex) {
+      throw new IllegalArgumentException(
+          "Can't create SCM Security client", ex);
+    }
+  }
+
 }
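
createScmSecurityClient() mirrors the existing createScmClient() helper and is what the new cert subcommands later in this patch rely on. A sketch of the call pattern; the serial id value is illustrative:

    SCMSecurityProtocol securityClient = scmOption.createScmSecurityClient();
    String certPem = securityClient.getCertificate("12345");
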
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
index c1aebae..bb442e0 100644
--- a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
@@ -34,8 +34,6 @@
 import org.apache.hadoop.hdds.scm.client.ScmClient;
 
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DEAD;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONED;
-import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DECOMMISSIONING;
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
 
@@ -60,8 +58,6 @@
     STATES.add(HEALTHY);
     STATES.add(STALE);
     STATES.add(DEAD);
-    STATES.add(DECOMMISSIONING);
-    STATES.add(DECOMMISSIONED);
   }
 
   @CommandLine.Option(names = {"-o", "--order"},
@@ -73,16 +69,11 @@
   private boolean fullInfo;
 
   @Override
-  public Class<?> getParentType() {
-    return OzoneAdmin.class;
-  }
-
-  @Override
-  protected void execute(ScmClient scmClient) throws IOException {
+  public void execute(ScmClient scmClient) throws IOException {
     for (HddsProtos.NodeState state : STATES) {
-      List<HddsProtos.Node> nodes = scmClient.queryNode(state,
+      List<HddsProtos.Node> nodes = scmClient.queryNode(null, state,
           HddsProtos.QueryScope.CLUSTER, "");
-      if (nodes != null && !nodes.isEmpty()) {
+      if (nodes != null && nodes.size() > 0) {
         // show node state
         System.out.println("State = " + state.toString());
         if (order) {
@@ -94,27 +85,36 @@
     }
   }
 
+  public Class<?> getParentType() {
+    return OzoneAdmin.class;
+  }
+
   // Format
   // Location: rack1
-  //  ipAddress(hostName)
+  //  ipAddress(hostName) OperationalState
   private void printOrderedByLocation(List<HddsProtos.Node> nodes) {
     HashMap<String, TreeSet<DatanodeDetails>> tree =
         new HashMap<>();
+    HashMap<DatanodeDetails, HddsProtos.NodeOperationalState> state =
+        new HashMap<>();
+
     for (HddsProtos.Node node : nodes) {
       String location = node.getNodeID().getNetworkLocation();
       if (location != null && !tree.containsKey(location)) {
         tree.put(location, new TreeSet<>());
       }
-      tree.get(location).add(DatanodeDetails.getFromProtoBuf(node.getNodeID()));
+      DatanodeDetails dn = DatanodeDetails.getFromProtoBuf(node.getNodeID());
+      tree.get(location).add(dn);
+      state.put(dn, node.getNodeOperationalStates(0));
     }
     ArrayList<String> locations = new ArrayList<>(tree.keySet());
     Collections.sort(locations);
 
     locations.forEach(location -> {
       System.out.println("Location: " + location);
-      tree.get(location).forEach(node -> {
-        System.out.println(" " + node.getIpAddress() + "(" + node.getHostName()
-            + ")");
+      tree.get(location).forEach(n -> {
+        System.out.println(" " + n.getIpAddress() + "(" + n.getHostName()
+            + ") "+state.get(n));
       });
     });
   }
@@ -135,16 +135,18 @@
     return fullInfo ? node.getNodeID().getUuid() + "/" : "";
   }
 
-  // Format "ipAddress(hostName):PortName1=PortValue1    networkLocation"
+  // Format "ipAddress(hostName):PortName1=PortValue1    OperationalState
+  //     networkLocation"
   private void printNodesWithLocation(Collection<HddsProtos.Node> nodes) {
     nodes.forEach(node -> {
       System.out.print(" " + getAdditionNodeOutput(node) +
           node.getNodeID().getIpAddress() + "(" +
           node.getNodeID().getHostName() + ")" +
           ":" + formatPortOutput(node.getNodeID().getPortsList()));
-      System.out.println("    " +
+      System.out.println("    "
+          + node.getNodeOperationalStates(0) + "    " +
           (node.getNodeID().getNetworkLocation() != null ?
               node.getNodeID().getNetworkLocation() : "NA"));
     });
   }
-}
+}
\ No newline at end of file
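
With the operational state appended to each entry, the location-ordered topology output now looks roughly like this (addresses and states are illustrative):

    Location: /rack1
     10.0.0.1(dn1.example.com) IN_SERVICE
     10.0.0.2(dn2.example.com) DECOMMISSIONING
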
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/CertCommands.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/CertCommands.java
new file mode 100644
index 0000000..7d897ff
--- /dev/null
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/CertCommands.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.cli.cert;
+
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hdds.cli.GenericCli;
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.cli.OzoneAdmin;
+import org.apache.hadoop.hdds.cli.SubcommandWithParent;
+
+import org.kohsuke.MetaInfServices;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Model.CommandSpec;
+import picocli.CommandLine.Spec;
+
+/**
+ * Sub command for certificate related operations.
+ */
+@Command(
+    name = "cert",
+    description = "Certificate related operations",
+    mixinStandardHelpOptions = true,
+    versionProvider = HddsVersionProvider.class,
+    subcommands = {
+      InfoSubcommand.class,
+      ListSubcommand.class,
+    })
+
+@MetaInfServices(SubcommandWithParent.class)
+public class CertCommands implements Callable<Void>, SubcommandWithParent {
+
+  @Spec
+  private CommandSpec spec;
+
+  @Override
+  public Void call() throws Exception {
+    GenericCli.missingSubcommand(spec);
+    return null;
+  }
+
+  @Override
+  public Class<?> getParentType() {
+    return OzoneAdmin.class;
+  }
+}
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/InfoSubcommand.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/InfoSubcommand.java
new file mode 100644
index 0000000..8ed139d
--- /dev/null
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/InfoSubcommand.java
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.cli.cert;
+
+import java.io.IOException;
+import java.security.cert.CertificateException;
+import java.security.cert.X509Certificate;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
+
+import org.apache.hadoop.hdds.security.x509.certificate.utils.CertificateCodec;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Model.CommandSpec;
+import picocli.CommandLine.Parameters;
+import picocli.CommandLine.Spec;
+
+/**
+ * This is the handler that processes the certificate info command.
+ */
+@Command(
+    name = "info",
+    description = "Show detailed information for a specific certificate",
+    mixinStandardHelpOptions = true,
+    versionProvider = HddsVersionProvider.class)
+
+class InfoSubcommand extends ScmCertSubcommand {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(InfoSubcommand.class);
+
+  @Spec
+  private CommandSpec spec;
+
+  @Parameters(description = "Serial id of the certificate in decimal.")
+  private String serialId;
+
+  @Override
+  public void execute(SCMSecurityProtocol client) throws IOException {
+    final String certPemStr =
+        client.getCertificate(serialId);
+    Preconditions.checkNotNull(certPemStr,
+        "Certificate can't be found");
+
+    // Print certificate info.
+    LOG.info("Certificate id: {}", serialId);
+    try {
+      X509Certificate cert = CertificateCodec.getX509Cert(certPemStr);
+      LOG.info(cert.toString());
+    } catch (CertificateException ex) {
+      LOG.error("Failed to get certificate id " + serialId);
+      throw new IOException("Fail to get certificate id " + serialId, ex);
+    }
+  }
+}
\ No newline at end of file
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/ListSubcommand.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/ListSubcommand.java
new file mode 100644
index 0000000..0ac5f9f
--- /dev/null
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/ListSubcommand.java
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.cli.cert;
+
+import java.io.IOException;
+import java.security.cert.CertificateException;
+import java.security.cert.X509Certificate;
+import java.util.List;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
+
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.security.x509.certificate.utils.CertificateCodec;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Help.Visibility;
+import picocli.CommandLine.Option;
+
+/**
+ * This is the handler that processes the certificate list command.
+ */
+@Command(
+    name = "list",
+    description = "List certificates",
+    mixinStandardHelpOptions = true,
+    versionProvider = HddsVersionProvider.class)
+public class ListSubcommand extends ScmCertSubcommand {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ListSubcommand.class);
+
+  @Option(names = {"-s", "--start"},
+      description = "Certificate serial id to start the iteration",
+      defaultValue = "0", showDefaultValue = Visibility.ALWAYS)
+  private long startSerialId;
+
+  @Option(names = {"-c", "--count"},
+      description = "Maximum number of certificates to list",
+      defaultValue = "20", showDefaultValue = Visibility.ALWAYS)
+  private int count;
+
+  @Option(names = {"-r", "--role"},
+      description = "Filter certificate by the role: om/datanode",
+      defaultValue = "datanode", showDefaultValue = Visibility.ALWAYS)
+  private String role;
+
+  @Option(names = {"-t", "--type"},
+      description = "Filter certificate by the type: valid or revoked",
+      defaultValue = "valid", showDefaultValue = Visibility.ALWAYS)
+  private String type;
+  private static final String OUTPUT_FORMAT = "%-17s %-30s %-30s %-110s";
+
+  private HddsProtos.NodeType parseCertRole(String r) {
+    if (r.equalsIgnoreCase("om")) {
+      return HddsProtos.NodeType.OM;
+    } else if (r.equalsIgnoreCase("scm")) {
+      return HddsProtos.NodeType.SCM;
+    } else {
+      return HddsProtos.NodeType.DATANODE;
+    }
+  }
+
+  private void printCert(X509Certificate cert) {
+    LOG.info(String.format(OUTPUT_FORMAT, cert.getSerialNumber(),
+        cert.getNotBefore(), cert.getNotAfter(), cert.getSubjectDN()));
+  }
+
+  @Override
+  protected void execute(SCMSecurityProtocol client) throws IOException {
+    boolean isRevoked = type.equalsIgnoreCase("revoked");
+    List<String> certPemList = client.listCertificate(
+        parseCertRole(role), startSerialId, count, isRevoked);
+    LOG.info("Total {} {} certificates: ", certPemList.size(), type);
+    LOG.info(String.format(OUTPUT_FORMAT, "SerialNumber", "Valid From",
+        "Expiry", "Subject"));
+    for (String certPemStr : certPemList) {
+      try {
+        X509Certificate cert = CertificateCodec.getX509Certificate(certPemStr);
+        printCert(cert);
+      } catch (CertificateException ex) {
+        LOG.error("Failed to parse certificate.");
+      }
+    }
+  }
+}
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/ScmCertSubcommand.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/ScmCertSubcommand.java
new file mode 100644
index 0000000..98bd76a
--- /dev/null
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/ScmCertSubcommand.java
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.cli.cert;
+
+import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
+import org.apache.hadoop.hdds.scm.cli.ScmOption;
+import picocli.CommandLine;
+
+import java.io.IOException;
+import java.util.concurrent.Callable;
+
+/**
+ * Base class for admin commands that connect via SCM security client.
+ */
+public abstract class ScmCertSubcommand implements Callable<Void> {
+
+  @CommandLine.Mixin
+  private ScmOption scmOption;
+
+  protected abstract void execute(SCMSecurityProtocol client)
+      throws IOException;
+
+  @Override
+  public final Void call() throws Exception {
+    SCMSecurityProtocol client = scmOption.createScmSecurityClient();
+    execute(client);
+    return null;
+  }
+}
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/package-info.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/package-info.java
new file mode 100644
index 0000000..3541194
--- /dev/null
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/cert/package-info.java
@@ -0,0 +1,22 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * Contains all of the SCM CA certificate related commands.
+ */
+package org.apache.hadoop.hdds.scm.cli.cert;
\ No newline at end of file
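
Assuming these commands are wired under the usual ozone admin entry point (OzoneAdmin is the declared parent type), typical invocations would look like the following; the serial id is illustrative:

    ozone admin cert list --role datanode --count 20
    ozone admin cert info 123456789
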
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DatanodeCommands.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DatanodeCommands.java
index 7e77c60..4f8d4d1 100644
--- a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DatanodeCommands.java
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DatanodeCommands.java
@@ -38,7 +38,10 @@
     mixinStandardHelpOptions = true,
     versionProvider = HddsVersionProvider.class,
     subcommands = {
-        ListInfoSubcommand.class
+        ListInfoSubcommand.class,
+        DecommissionSubCommand.class,
+        MaintenanceSubCommand.class,
+        RecommissionSubCommand.class
     })
 @MetaInfServices(SubcommandWithParent.class)
 public class DatanodeCommands implements Callable<Void>, SubcommandWithParent {
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DecommissionSubCommand.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DecommissionSubCommand.java
new file mode 100644
index 0000000..a4e7e3f
--- /dev/null
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/DecommissionSubCommand.java
@@ -0,0 +1,46 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.cli.datanode;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.scm.cli.ScmSubcommand;
+import org.apache.hadoop.hdds.scm.client.ScmClient;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Decommission one or more datanodes.
+ */
+@Command(
+    name = "decommission",
+    description = "Decommission a datanode",
+    mixinStandardHelpOptions = true,
+    versionProvider = HddsVersionProvider.class)
+public class DecommissionSubCommand extends ScmSubcommand {
+
+  @CommandLine.Parameters(description = "List of fully qualified host names")
+  private List<String> hosts = new ArrayList<String>();
+
+  @Override
+  public void execute(ScmClient scmClient) throws IOException {
+    scmClient.decommissionNodes(hosts);
+  }
+}
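
Again assuming the ozone admin entry point, decommissioning one or more datanodes would look like this (host names illustrative):

    ozone admin datanode decommission dn1.example.com dn2.example.com
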
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/ListInfoSubcommand.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/ListInfoSubcommand.java
index 80c5eca..38ad390 100644
--- a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/ListInfoSubcommand.java
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/ListInfoSubcommand.java
@@ -60,30 +60,34 @@
     if (Strings.isNullOrEmpty(ipaddress) && Strings.isNullOrEmpty(uuid)) {
       getAllNodes(scmClient).forEach(this::printDatanodeInfo);
     } else {
-      Stream<DatanodeDetails> allNodes = getAllNodes(scmClient).stream();
+      Stream<DatanodeWithAttributes> allNodes = getAllNodes(scmClient).stream();
       if (!Strings.isNullOrEmpty(ipaddress)) {
-        allNodes = allNodes.filter(p -> p.getIpAddress()
+        allNodes = allNodes.filter(p -> p.getDatanodeDetails().getIpAddress()
             .compareToIgnoreCase(ipaddress) == 0);
       }
       if (!Strings.isNullOrEmpty(uuid)) {
-        allNodes = allNodes.filter(p -> p.getUuid().toString().equals(uuid));
+        allNodes = allNodes.filter(p ->
+            p.getDatanodeDetails().toString().equals(uuid));
       }
       allNodes.forEach(this::printDatanodeInfo);
     }
   }
 
-  private List<DatanodeDetails> getAllNodes(ScmClient scmClient)
+  private List<DatanodeWithAttributes> getAllNodes(ScmClient scmClient)
       throws IOException {
-    List<HddsProtos.Node> nodes = scmClient.queryNode(
+    List<HddsProtos.Node> nodes = scmClient.queryNode(null,
         HddsProtos.NodeState.HEALTHY, HddsProtos.QueryScope.CLUSTER, "");
 
     return nodes.stream()
-        .map(p -> DatanodeDetails.getFromProtoBuf(p.getNodeID()))
+        .map(p -> new DatanodeWithAttributes(
+            DatanodeDetails.getFromProtoBuf(p.getNodeID()),
+            p.getNodeOperationalStates(0), p.getNodeStates(0)))
         .collect(Collectors.toList());
   }
 
-  private void printDatanodeInfo(DatanodeDetails datanode) {
+  private void printDatanodeInfo(DatanodeWithAttributes dna) {
     StringBuilder pipelineListInfo = new StringBuilder();
+    DatanodeDetails datanode = dna.getDatanodeDetails();
     int relatedPipelineNum = 0;
     if (!pipelines.isEmpty()) {
       List<Pipeline> relatedPipelines = pipelines.stream().filter(
@@ -108,6 +112,34 @@
     System.out.println("Datanode: " + datanode.getUuid().toString() +
         " (" + datanode.getNetworkLocation() + "/" + datanode.getIpAddress()
         + "/" + datanode.getHostName() + "/" + relatedPipelineNum +
-        " pipelines) \n" + "Related pipelines: \n" + pipelineListInfo);
+        " pipelines)");
+    System.out.println("Operational State: " + dna.getOpState());
+    System.out.println("Related pipelines: \n" + pipelineListInfo);
+  }
+
+  private static class DatanodeWithAttributes {
+    private DatanodeDetails datanodeDetails;
+    private HddsProtos.NodeOperationalState operationalState;
+    private HddsProtos.NodeState healthState;
+
+    DatanodeWithAttributes(DatanodeDetails dn,
+        HddsProtos.NodeOperationalState opState,
+        HddsProtos.NodeState healthState) {
+      this.datanodeDetails = dn;
+      this.operationalState = opState;
+      this.healthState = healthState;
+    }
+
+    public DatanodeDetails getDatanodeDetails() {
+      return datanodeDetails;
+    }
+
+    public HddsProtos.NodeOperationalState getOpState() {
+      return operationalState;
+    }
+
+    public HddsProtos.NodeState getHealthState() {
+      return healthState;
+    }
   }
 }
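
The reworked printDatanodeInfo emits the operational state on its own line, which the new unit test later in this patch matches with a regex. Illustrative output for a single node (uuid, location and addresses are made up):

    Datanode: <datanode-uuid> (/rack1/10.0.0.1/dn1.example.com/2 pipelines)
    Operational State: IN_SERVICE
    Related pipelines:
    <pipeline list>
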
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/MaintenanceSubCommand.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/MaintenanceSubCommand.java
new file mode 100644
index 0000000..fa1d802
--- /dev/null
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/MaintenanceSubCommand.java
@@ -0,0 +1,51 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.cli.datanode;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.scm.cli.ScmSubcommand;
+import org.apache.hadoop.hdds.scm.client.ScmClient;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Place one or more datanodes into Maintenance Mode.
+ */
+@Command(
+    name = "maintenance",
+    description = "Put a datanode into Maintenance Mode",
+    mixinStandardHelpOptions = true,
+    versionProvider = HddsVersionProvider.class)
+public class MaintenanceSubCommand extends ScmSubcommand {
+
+  @CommandLine.Parameters(description = "List of fully qualified host names")
+  private List<String> hosts = new ArrayList<String>();
+
+  @CommandLine.Option(names = {"--end"},
+      description = "Automatically end maintenance after the given hours. "+
+          "By default, maintenance must be ended manually.")
+  private int endInHours = 0;
+
+  @Override
+  public void execute(ScmClient scmClient) throws IOException {
+    scmClient.startMaintenanceNodes(hosts, endInHours);
+  }
+}
\ No newline at end of file
diff --git a/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/RecommissionSubCommand.java b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/RecommissionSubCommand.java
new file mode 100644
index 0000000..b6e2f3d
--- /dev/null
+++ b/hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/datanode/RecommissionSubCommand.java
@@ -0,0 +1,46 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.cli.datanode;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.scm.cli.ScmSubcommand;
+import org.apache.hadoop.hdds.scm.client.ScmClient;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Place decommissioned or maintenance nodes back into service.
+ */
+@Command(
+    name = "recommission",
+    description = "Return a datanode to service",
+    mixinStandardHelpOptions = true,
+    versionProvider = HddsVersionProvider.class)
+public class RecommissionSubCommand extends ScmSubcommand {
+
+  @CommandLine.Parameters(description = "List of fully qualified host names")
+  private List<String> hosts = new ArrayList<String>();
+
+  @Override
+  public void execute(ScmClient scmClient) throws IOException {
+    scmClient.recommissionNodes(hosts);
+  }
+}
\ No newline at end of file
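
Maintenance and recommission follow the same shape as decommission; again assuming the ozone admin entry point (host names illustrative):

    ozone admin datanode maintenance --end 24 dn3.example.com
    ozone admin datanode recommission dn1.example.com dn2.example.com
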
diff --git a/hadoop-hdds/tools/src/test/java/org/apache/hadoop/hdds/scm/cli/datanode/TestListInfoSubcommand.java b/hadoop-hdds/tools/src/test/java/org/apache/hadoop/hdds/scm/cli/datanode/TestListInfoSubcommand.java
new file mode 100644
index 0000000..45d4d7b
--- /dev/null
+++ b/hadoop-hdds/tools/src/test/java/org/apache/hadoop/hdds/scm/cli/datanode/TestListInfoSubcommand.java
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.cli.datanode;
+
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.client.ScmClient;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.ByteArrayOutputStream;
+import java.io.PrintStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import org.mockito.Mockito;
+
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.any;
+
+/**
+ * Unit tests to validate that the ListInfoSubcommand class includes the
+ * correct output when executed against a mock client.
+ */
+public class TestListInfoSubcommand {
+
+  private ListInfoSubcommand cmd;
+  private final ByteArrayOutputStream outContent = new ByteArrayOutputStream();
+  private final ByteArrayOutputStream errContent = new ByteArrayOutputStream();
+  private final PrintStream originalOut = System.out;
+  private final PrintStream originalErr = System.err;
+
+  @Before
+  public void setup() {
+    cmd = new ListInfoSubcommand();
+    System.setOut(new PrintStream(outContent));
+    System.setErr(new PrintStream(errContent));
+  }
+
+  @After
+  public void tearDown() {
+    System.setOut(originalOut);
+    System.setErr(originalErr);
+  }
+
+  @Test
+  public void testDataNodeOperationalStateIncludedInOutput() throws Exception {
+    ScmClient scmClient = mock(ScmClient.class);
+    Mockito.when(scmClient.queryNode(any(HddsProtos.NodeOperationalState.class),
+        any(HddsProtos.NodeState.class), any(HddsProtos.QueryScope.class),
+        Mockito.anyString()))
+        .thenAnswer(invocation -> getNodeDetails());
+    Mockito.when(scmClient.listPipelines())
+        .thenReturn(new ArrayList<>());
+
+    cmd.execute(scmClient);
+
+    // The output should contain a string like:
+    // <other lines>
+    // Operational State: <STATE>
+    // <other lines>
+    Pattern p = Pattern.compile(
+        "^Operational State:\\s+IN_SERVICE$", Pattern.MULTILINE);
+    Matcher m = p.matcher(outContent.toString());
+    assertTrue(m.find());
+    // Should also have a node with the state DECOMMISSIONING
+    p = Pattern.compile(
+        "^Operational State:\\s+DECOMMISSIONING$", Pattern.MULTILINE);
+    m = p.matcher(outContent.toString());
+    assertTrue(m.find());
+  }
+
+  private List<HddsProtos.Node> getNodeDetails() {
+    List<HddsProtos.Node> nodes = new ArrayList<>();
+
+    for (int i = 0; i < 2; i++) {
+      HddsProtos.DatanodeDetailsProto.Builder dnd =
+          HddsProtos.DatanodeDetailsProto.newBuilder();
+      dnd.setHostName("host" + i);
+      dnd.setIpAddress("1.2.3." + (i + 1));
+      dnd.setNetworkLocation("/default");
+      dnd.setNetworkName("host" + i);
+      dnd.addPorts(HddsProtos.Port.newBuilder()
+          .setName("ratis").setValue(5678).build());
+      dnd.setUuid(UUID.randomUUID().toString());
+
+      HddsProtos.Node.Builder builder = HddsProtos.Node.newBuilder();
+      if (i == 0) {
+        builder.addNodeOperationalStates(
+            HddsProtos.NodeOperationalState.IN_SERVICE);
+      } else {
+        builder.addNodeOperationalStates(
+            HddsProtos.NodeOperationalState.DECOMMISSIONING);
+      }
+      builder.addNodeStates(HddsProtos.NodeState.HEALTHY);
+      builder.setNodeID(dnd.build());
+      nodes.add(builder.build());
+    }
+    return nodes;
+  }
+}
\ No newline at end of file
diff --git a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/BucketArgs.java b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/BucketArgs.java
index 29613cc..ccd4081 100644
--- a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/BucketArgs.java
+++ b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/BucketArgs.java
@@ -58,7 +58,7 @@
   private final String sourceBucket;
 
   private long quotaInBytes;
-  private long quotaInCounts;
+  private long quotaInNamespace;
 
   /**
    * Private constructor, constructed via builder.
@@ -70,13 +70,13 @@
    * @param sourceVolume
    * @param sourceBucket
    * @param quotaInBytes Bucket quota in bytes.
-   * @param quotaInCounts Bucket quota in counts.
+   * @param quotaInNamespace Bucket namespace quota (maximum number of keys).
    */
   @SuppressWarnings("parameternumber")
   private BucketArgs(Boolean versioning, StorageType storageType,
       List<OzoneAcl> acls, Map<String, String> metadata,
       String bucketEncryptionKey, String sourceVolume, String sourceBucket,
-      long quotaInBytes, long quotaInCounts) {
+      long quotaInBytes, long quotaInNamespace) {
     this.acls = acls;
     this.versioning = versioning;
     this.storageType = storageType;
@@ -85,7 +85,7 @@
     this.sourceVolume = sourceVolume;
     this.sourceBucket = sourceBucket;
     this.quotaInBytes = quotaInBytes;
-    this.quotaInCounts = quotaInCounts;
+    this.quotaInNamespace = quotaInNamespace;
   }
 
   /**
@@ -156,10 +156,10 @@
 
   /**
    * Returns Bucket Quota in key counts.
-   * @return quotaInCounts.
+   * @return quotaInNamespace.
    */
-  public long getQuotaInCounts() {
-    return quotaInCounts;
+  public long getQuotaInNamespace() {
+    return quotaInNamespace;
   }
 
   /**
@@ -174,7 +174,7 @@
     private String sourceVolume;
     private String sourceBucket;
     private long quotaInBytes;
-    private long quotaInCounts;
+    private long quotaInNamespace;
 
     public Builder() {
       metadata = new HashMap<>();
@@ -220,8 +220,8 @@
       return this;
     }
 
-    public BucketArgs.Builder setQuotaInCounts(long quota) {
-      quotaInCounts = quota;
+    public BucketArgs.Builder setQuotaInNamespace(long quota) {
+      quotaInNamespace = quota;
       return this;
     }
 
@@ -233,7 +233,7 @@
     public BucketArgs build() {
       return new BucketArgs(versioning, storageType, acls, metadata,
           bucketEncryptionKey, sourceVolume, sourceBucket, quotaInBytes,
-          quotaInCounts);
+          quotaInNamespace);
     }
   }
 }
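
Downstream callers switch from setQuotaInCounts to setQuotaInNamespace. A minimal usage sketch of the renamed builder follows (illustrative only; setQuotaInBytes is assumed to exist on BucketArgs.Builder, it is not shown in this hunk):

    // Sketch: set both quota dimensions when creating a bucket.
    BucketArgs args = new BucketArgs.Builder()
        .setQuotaInBytes(10L * 1024 * 1024 * 1024)  // space quota: 10 GB
        .setQuotaInNamespace(100000L)               // namespace quota: at most 100k keys
        .build();
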
diff --git a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
index 20a1271..f688a66 100644
--- a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
+++ b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
@@ -101,6 +101,11 @@
   private long usedBytes;
 
   /**
+   * Used namespace of the bucket.
+   */
+  private long usedNamespace;
+
+  /**
    * Creation time of the bucket.
    */
   private Instant creationTime;
@@ -127,7 +132,7 @@
   /**
    * Quota of key count allocated for the bucket.
    */
-  private long quotaInCounts;
+  private long quotaInNamespace;
 
   private OzoneBucket(ConfigurationSource conf, String volumeName,
       String bucketName, ReplicationFactor defaultReplication,
@@ -197,13 +202,14 @@
       Boolean versioning, long creationTime, long modificationTime,
       Map<String, String> metadata, String encryptionKeyName,
       String sourceVolume, String sourceBucket, long usedBytes,
-      long quotaInBytes, long quotaInCounts) {
+      long usedNamespace, long quotaInBytes, long quotaInNamespace) {
     this(conf, proxy, volumeName, bucketName, storageType, versioning,
         creationTime, metadata, encryptionKeyName, sourceVolume, sourceBucket);
     this.usedBytes = usedBytes;
+    this.usedNamespace = usedNamespace;
     this.modificationTime = Instant.ofEpochMilli(modificationTime);
     this.quotaInBytes = quotaInBytes;
-    this.quotaInCounts = quotaInCounts;
+    this.quotaInNamespace = quotaInNamespace;
   }
 
   /**
@@ -365,10 +371,10 @@
   /**
    * Returns quota of key counts allocated for the Bucket.
    *
-   * @return quotaInCounts
+   * @return quotaInNamespace
    */
-  public long getQuotaInCounts() {
-    return quotaInCounts;
+  public long getQuotaInNamespace() {
+    return quotaInNamespace;
   }
 
   /**
@@ -421,23 +427,23 @@
    */
   public void clearSpaceQuota() throws IOException {
     OzoneBucket ozoneBucket = proxy.getBucketDetails(volumeName, name);
-    proxy.setBucketQuota(volumeName, name, ozoneBucket.getQuotaInCounts(),
+    proxy.setBucketQuota(volumeName, name, ozoneBucket.getQuotaInNamespace(),
         QUOTA_RESET);
     quotaInBytes = QUOTA_RESET;
-    quotaInCounts = ozoneBucket.getQuotaInCounts();
+    quotaInNamespace = ozoneBucket.getQuotaInNamespace();
   }
 
   /**
-   * Clean the count quota of the bucket.
+   * Clear the namespace quota of the bucket.
    *
    * @throws IOException
    */
-  public void clearCountQuota() throws IOException {
+  public void clearNamespaceQuota() throws IOException {
     OzoneBucket ozoneBucket = proxy.getBucketDetails(volumeName, name);
     proxy.setBucketQuota(volumeName, name, QUOTA_RESET,
         ozoneBucket.getQuotaInBytes());
     quotaInBytes = ozoneBucket.getQuotaInBytes();
-    quotaInCounts = QUOTA_RESET;
+    quotaInNamespace = QUOTA_RESET;
   }
 
   /**
@@ -447,10 +453,10 @@
    * @throws IOException
    */
   public void setQuota(OzoneQuota quota) throws IOException {
-    proxy.setBucketQuota(volumeName, name, quota.getQuotaInCounts(),
+    proxy.setBucketQuota(volumeName, name, quota.getQuotaInNamespace(),
         quota.getQuotaInBytes());
     quotaInBytes = quota.getQuotaInBytes();
-    quotaInCounts = quota.getQuotaInCounts();
+    quotaInNamespace = quota.getQuotaInNamespace();
   }
 
   /**
@@ -509,6 +515,10 @@
     return usedBytes;
   }
 
+  public long getUsedNamespace() {
+    return usedNamespace;
+  }
+
   /**
    * Returns Iterator to iterate over all keys in the bucket.
    * The result can be restricted using key prefix, will return all
@@ -517,7 +527,8 @@
    * @param keyPrefix Bucket prefix to match
    * @return {@code Iterator<OzoneKey>}
    */
-  public Iterator<? extends OzoneKey> listKeys(String keyPrefix) {
+  public Iterator<? extends OzoneKey> listKeys(String keyPrefix)
+      throws IOException {
     return listKeys(keyPrefix, null);
   }
 
@@ -532,7 +543,7 @@
    * @return {@code Iterator<OzoneKey>}
    */
   public Iterator<? extends OzoneKey> listKeys(String keyPrefix,
-      String prevKey) {
+      String prevKey) throws IOException {
     return new KeyIterator(keyPrefix, prevKey);
   }
 
@@ -760,7 +771,6 @@
   private class KeyIterator implements Iterator<OzoneKey> {
 
     private String keyPrefix = null;
-
     private Iterator<OzoneKey> currentIterator;
     private OzoneKey currentValue;
 
@@ -771,7 +781,7 @@
      * The returned keys match key prefix.
      * @param keyPrefix
      */
-    KeyIterator(String keyPrefix, String prevKey) {
+    KeyIterator(String keyPrefix, String prevKey) throws IOException {
       this.keyPrefix = keyPrefix;
       this.currentValue = null;
       this.currentIterator = getNextListOfKeys(prevKey).iterator();
@@ -780,7 +790,12 @@
     @Override
     public boolean hasNext() {
       if(!currentIterator.hasNext() && currentValue != null) {
-        currentIterator = getNextListOfKeys(currentValue.getName()).iterator();
+        try {
+          currentIterator =
+              getNextListOfKeys(currentValue.getName()).iterator();
+        } catch (IOException e) {
+          throw new RuntimeException(e);
+        }
       }
       return currentIterator.hasNext();
     }
@@ -799,13 +814,10 @@
      * @param prevKey
      * @return {@code List<OzoneKey>}
      */
-    private List<OzoneKey> getNextListOfKeys(String prevKey) {
-      try {
-        return proxy.listKeys(volumeName, name, keyPrefix, prevKey,
-            listCacheSize);
-      } catch (IOException e) {
-        throw new RuntimeException(e);
-      }
+    private List<OzoneKey> getNextListOfKeys(String prevKey) throws
+        IOException {
+      return proxy.listKeys(volumeName, name, keyPrefix, prevKey,
+          listCacheSize);
     }
   }
 }
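
Because KeyIterator now fetches its first page of results in the constructor, listKeys reports a failed initial listing as a checked IOException, while failures on later pages (fetched inside hasNext) are still wrapped in RuntimeException. A hedged caller-side sketch, assuming bucket is an OzoneBucket handle:

    // Sketch: initial listing failure is checked, mid-iteration failure is unchecked.
    try {
      Iterator<? extends OzoneKey> it = bucket.listKeys("logs/");
      while (it.hasNext()) {
        System.out.println(it.next().getName());
      }
    } catch (IOException e) {
      // the first page could not be listed
    }
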
diff --git a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneVolume.java b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneVolume.java
index b54692a..369500f 100644
--- a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneVolume.java
+++ b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneVolume.java
@@ -69,7 +69,11 @@
   /**
    * Quota of bucket count allocated for the Volume.
    */
-  private long quotaInCounts;
+  private long quotaInNamespace;
+  /**
+   * Bucket namespace quota usage.
+   */
+  private long usedNamespace;
   /**
    * Creation time of the volume.
    */
@@ -100,7 +104,7 @@
   @SuppressWarnings("parameternumber")
   public OzoneVolume(ConfigurationSource conf, ClientProtocol proxy,
       String name, String admin, String owner, long quotaInBytes,
-      long quotaInCounts, long creationTime, List<OzoneAcl> acls,
+      long quotaInNamespace, long creationTime, List<OzoneAcl> acls,
       Map<String, String> metadata) {
     Preconditions.checkNotNull(proxy, "Client proxy is not set.");
     this.proxy = proxy;
@@ -108,7 +112,7 @@
     this.admin = admin;
     this.owner = owner;
     this.quotaInBytes = quotaInBytes;
-    this.quotaInCounts = quotaInCounts;
+    this.quotaInNamespace = quotaInNamespace;
     this.creationTime = Instant.ofEpochMilli(creationTime);
     this.acls = acls;
     this.listCacheSize = HddsClientUtils.getListCacheSize(conf);
@@ -126,18 +130,20 @@
   @SuppressWarnings("parameternumber")
   public OzoneVolume(ConfigurationSource conf, ClientProtocol proxy,
       String name, String admin, String owner, long quotaInBytes,
-      long quotaInCounts, long creationTime, long modificationTime,
-      List<OzoneAcl> acls, Map<String, String> metadata) {
-    this(conf, proxy, name, admin, owner, quotaInBytes, quotaInCounts,
+      long quotaInNamespace, long usedNamespace, long creationTime,
+      long modificationTime, List<OzoneAcl> acls,
+      Map<String, String> metadata) {
+    this(conf, proxy, name, admin, owner, quotaInBytes, quotaInNamespace,
         creationTime, acls, metadata);
     this.modificationTime = Instant.ofEpochMilli(modificationTime);
+    this.usedNamespace = usedNamespace;
   }
 
   @SuppressWarnings("parameternumber")
   public OzoneVolume(ConfigurationSource conf, ClientProtocol proxy,
       String name, String admin, String owner, long quotaInBytes,
-      long quotaInCounts, long creationTime, List<OzoneAcl> acls) {
-    this(conf, proxy, name, admin, owner, quotaInBytes, quotaInCounts,
+      long quotaInNamespace, long creationTime, List<OzoneAcl> acls) {
+    this(conf, proxy, name, admin, owner, quotaInBytes, quotaInNamespace,
         creationTime, acls, new HashMap<>());
     modificationTime = Instant.now();
     if (modificationTime.isBefore(this.creationTime)) {
@@ -149,23 +155,24 @@
   @SuppressWarnings("parameternumber")
   public OzoneVolume(ConfigurationSource conf, ClientProtocol proxy,
       String name, String admin, String owner, long quotaInBytes,
-      long quotaInCounts, long creationTime, long modificationTime,
-      List<OzoneAcl> acls) {
-    this(conf, proxy, name, admin, owner, quotaInBytes, quotaInCounts,
+      long quotaInNamespace, long usedNamespace, long creationTime,
+      long modificationTime, List<OzoneAcl> acls) {
+    this(conf, proxy, name, admin, owner, quotaInBytes, quotaInNamespace,
         creationTime, acls);
     this.modificationTime = Instant.ofEpochMilli(modificationTime);
+    this.usedNamespace = usedNamespace;
   }
 
   @VisibleForTesting
   protected OzoneVolume(String name, String admin, String owner,
-      long quotaInBytes, long quotaInCounts, long creationTime,
+      long quotaInBytes, long quotaInNamespace, long creationTime,
       List<OzoneAcl> acls) {
     this.proxy = null;
     this.name = name;
     this.admin = admin;
     this.owner = owner;
     this.quotaInBytes = quotaInBytes;
-    this.quotaInCounts = quotaInCounts;
+    this.quotaInNamespace = quotaInNamespace;
     this.creationTime = Instant.ofEpochMilli(creationTime);
     this.acls = acls;
     this.metadata = new HashMap<>();
@@ -179,9 +186,10 @@
   @SuppressWarnings("parameternumber")
   @VisibleForTesting
   protected OzoneVolume(String name, String admin, String owner,
-      long quotaInBytes, long quotaInCounts, long creationTime,
+      long quotaInBytes, long quotaInNamespace, long creationTime,
       long modificationTime, List<OzoneAcl> acls) {
-    this(name, admin, owner, quotaInBytes, quotaInCounts, creationTime, acls);
+    this(name, admin, owner, quotaInBytes, quotaInNamespace, creationTime,
+        acls);
     this.modificationTime = Instant.ofEpochMilli(modificationTime);
   }
 
@@ -224,10 +232,10 @@
   /**
    * Returns quota of bucket counts allocated for the Volume.
    *
-   * @return quotaInCounts
+   * @return quotaInNamespace
    */
-  public long getQuotaInCounts() {
-    return quotaInCounts;
+  public long getQuotaInNamespace() {
+    return quotaInNamespace;
   }
   /**
    * Returns creation time of the volume.
@@ -257,6 +265,14 @@
   }
 
   /**
+   * Returns used bucket namespace.
+   * @return usedNamespace
+   */
+  public long getUsedNamespace() {
+    return usedNamespace;
+  }
+
+  /**
    * Sets/Changes the owner of this Volume.
    * @param userName new owner
    * @throws IOException
@@ -274,21 +290,21 @@
    */
   public void clearSpaceQuota() throws IOException {
     OzoneVolume ozoneVolume = proxy.getVolumeDetails(name);
-    proxy.setVolumeQuota(name, ozoneVolume.getQuotaInCounts(), QUOTA_RESET);
+    proxy.setVolumeQuota(name, ozoneVolume.getQuotaInNamespace(), QUOTA_RESET);
     this.quotaInBytes = QUOTA_RESET;
-    this.quotaInCounts = ozoneVolume.getQuotaInCounts();
+    this.quotaInNamespace = ozoneVolume.getQuotaInNamespace();
   }
 
   /**
-   * Clean the count quota of the volume.
+   * Clean the namespace quota of the volume.
    *
    * @throws IOException
    */
-  public void clearCountQuota() throws IOException {
+  public void clearNamespaceQuota() throws IOException {
     OzoneVolume ozoneVolume = proxy.getVolumeDetails(name);
     proxy.setVolumeQuota(name, QUOTA_RESET, ozoneVolume.getQuotaInBytes());
     this.quotaInBytes = ozoneVolume.getQuotaInBytes();
-    this.quotaInCounts = QUOTA_RESET;
+    this.quotaInNamespace = QUOTA_RESET;
   }
 
   /**
@@ -298,10 +314,10 @@
    * @throws IOException
    */
   public void setQuota(OzoneQuota quota) throws IOException {
-    proxy.setVolumeQuota(name, quota.getQuotaInCounts(),
+    proxy.setVolumeQuota(name, quota.getQuotaInNamespace(),
         quota.getQuotaInBytes());
     this.quotaInBytes = quota.getQuotaInBytes();
-    this.quotaInCounts = quota.getQuotaInCounts();
+    this.quotaInNamespace = quota.getQuotaInNamespace();
   }
 
   /**
diff --git a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/VolumeArgs.java b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/VolumeArgs.java
index 110a21c..12ade52 100644
--- a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/VolumeArgs.java
+++ b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/VolumeArgs.java
@@ -34,7 +34,7 @@
   private final String admin;
   private final String owner;
   private final long quotaInBytes;
-  private final long quotaInCounts;
+  private final long quotaInNamespace;
   private final List<OzoneAcl> acls;
   private Map<String, String> metadata;
 
@@ -43,20 +43,20 @@
    * @param admin Administrator's name.
    * @param owner Volume owner's name
    * @param quotaInBytes Volume quota in bytes.
-   * @param quotaInCounts Volume quota in counts.
+   * @param quotaInNamespace Volume namespace quota (maximum number of buckets).
    * @param acls User to access rights map.
    * @param metadata Metadata of volume.
    */
   private VolumeArgs(String admin,
       String owner,
       long quotaInBytes,
-      long quotaInCounts,
+      long quotaInNamespace,
       List<OzoneAcl> acls,
       Map<String, String> metadata) {
     this.admin = admin;
     this.owner = owner;
     this.quotaInBytes = quotaInBytes;
-    this.quotaInCounts = quotaInCounts;
+    this.quotaInNamespace = quotaInNamespace;
     this.acls = acls;
     this.metadata = metadata;
   }
@@ -87,10 +87,10 @@
 
   /**
    * Returns Volume Quota in bucket counts.
-   * @return quotaInCounts.
+   * @return quotaInNamespace.
    */
-  public long getQuotaInCounts() {
-    return quotaInCounts;
+  public long getQuotaInNamespace() {
+    return quotaInNamespace;
   }
 
   /**
@@ -121,7 +121,7 @@
     private String adminName;
     private String ownerName;
     private long quotaInBytes;
-    private long quotaInCounts;
+    private long quotaInNamespace;
     private List<OzoneAcl> listOfAcls;
     private Map<String, String> metadata = new HashMap<>();
 
@@ -141,8 +141,8 @@
       return this;
     }
 
-    public VolumeArgs.Builder setQuotaInCounts(long quota) {
-      this.quotaInCounts = quota;
+    public VolumeArgs.Builder setQuotaInNamespace(long quota) {
+      this.quotaInNamespace = quota;
       return this;
     }
 
@@ -162,7 +162,7 @@
      */
     public VolumeArgs build() {
       return new VolumeArgs(adminName, ownerName, quotaInBytes,
-          quotaInCounts, listOfAcls, metadata);
+          quotaInNamespace, listOfAcls, metadata);
     }
   }
 
diff --git a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
index f8f6cd3..a837fea 100644
--- a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
+++ b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
@@ -26,6 +26,7 @@
 import java.util.function.Function;
 import java.util.stream.Collectors;
 
+import org.apache.hadoop.fs.CanUnbuffer;
 import org.apache.hadoop.fs.FSExceptionMessages;
 import org.apache.hadoop.fs.Seekable;
 import org.apache.hadoop.hdds.client.BlockID;
@@ -43,7 +44,8 @@
 /**
  * Maintaining a list of BlockInputStream. Read based on offset.
  */
-public class KeyInputStream extends InputStream implements Seekable {
+public class KeyInputStream extends InputStream
+    implements Seekable, CanUnbuffer {
 
   private static final Logger LOG =
       LoggerFactory.getLogger(KeyInputStream.class);
@@ -333,4 +335,11 @@
     seek(getPos() + toSkip);
     return toSkip;
   }
+
+  @Override
+  public void unbuffer() {
+    for (BlockInputStream is : blockStreams) {
+      is.unbuffer();
+    }
+  }
 }
diff --git a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/OzoneInputStream.java b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/OzoneInputStream.java
index 14b2866..f01975c 100644
--- a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/OzoneInputStream.java
+++ b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/OzoneInputStream.java
@@ -17,6 +17,8 @@
 
 package org.apache.hadoop.ozone.client.io;
 
+import org.apache.hadoop.fs.CanUnbuffer;
+
 import java.io.IOException;
 import java.io.InputStream;
 
@@ -24,7 +26,7 @@
  * OzoneInputStream is used to read data from Ozone.
  * It uses {@link KeyInputStream} for reading the data.
  */
-public class OzoneInputStream extends InputStream {
+public class OzoneInputStream extends InputStream implements CanUnbuffer {
 
   private final InputStream inputStream;
 
@@ -65,4 +67,11 @@
   public InputStream getInputStream() {
     return inputStream;
   }
+
+  @Override
+  public void unbuffer() {
+    if (inputStream instanceof CanUnbuffer) {
+      ((CanUnbuffer) inputStream).unbuffer();
+    }
+  }
 }
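
With CanUnbuffer implemented by both OzoneInputStream and KeyInputStream, long-lived readers can release cached block buffers between bursts of reads. A hedged sketch of the intended call pattern (bucket.readKey returning an OzoneInputStream is an assumption, it is not part of this hunk):

    // Sketch: release per-block read buffers while keeping the stream open.
    try (OzoneInputStream in = bucket.readKey("big-key")) {
      byte[] chunk = new byte[4096];
      in.read(chunk);
      in.unbuffer();  // drop buffered data; the next read re-buffers as needed
      in.read(chunk);
    }
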
diff --git a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
index dbd47c7..863a109 100644
--- a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
+++ b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
@@ -100,12 +100,12 @@
   /**
    * Set Volume Quota.
    * @param volumeName Name of the Volume
-   * @param quotaInCounts The maximum number of buckets in this volume.
+   * @param quotaInNamespace The maximum number of buckets in this volume.
    * @param quotaInBytes The maximum size this volume can be used.
    * @throws IOException
    */
-  void setVolumeQuota(String volumeName, long quotaInCounts, long quotaInBytes)
-      throws IOException;
+  void setVolumeQuota(String volumeName, long quotaInNamespace,
+      long quotaInBytes) throws IOException;
 
   /**
    * Returns {@link OzoneVolume}.
@@ -661,9 +661,9 @@
    * @param volumeName Name of the Volume.
    * @param bucketName Name of the Bucket.
    * @param quotaInBytes The maximum size this buckets can be used.
-   * @param quotaInCounts The maximum number of keys in this bucket.
+   * @param quotaInNamespace The maximum number of keys in this bucket.
    * @throws IOException
    */
-  void setBucketQuota(String volumeName, String bucketName, long quotaInCounts,
-      long quotaInBytes) throws IOException;
+  void setBucketQuota(String volumeName, String bucketName,
+      long quotaInNamespace, long quotaInBytes) throws IOException;
 }
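
Note the argument order on both methods: the namespace quota comes before the byte quota. A hedged caller-side sketch (client as a ClientProtocol handle and bucket as an already-fetched OzoneBucket are assumptions for illustration) that resets only the space quota while preserving the namespace quota, mirroring clearSpaceQuota above:

    // Sketch: namespace quota first, byte quota second.
    client.setBucketQuota(volumeName, bucketName,
        bucket.getQuotaInNamespace(), OzoneConsts.QUOTA_RESET);
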
diff --git a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
index 532a3f3..28f9a01 100644
--- a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
+++ b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
@@ -231,14 +231,14 @@
       throws IOException {
     verifyVolumeName(volumeName);
     Preconditions.checkNotNull(volArgs);
-    verifyCountsQuota(volArgs.getQuotaInCounts());
+    verifyCountsQuota(volArgs.getQuotaInNamespace());
     verifySpaceQuota(volArgs.getQuotaInBytes());
 
     String admin = volArgs.getAdmin() == null ?
         ugi.getUserName() : volArgs.getAdmin();
     String owner = volArgs.getOwner() == null ?
         ugi.getUserName() : volArgs.getOwner();
-    long quotaInCounts = getQuotaValue(volArgs.getQuotaInCounts());
+    long quotaInNamespace = getQuotaValue(volArgs.getQuotaInNamespace());
     long quotaInBytes = getQuotaValue(volArgs.getQuotaInBytes());
     List<OzoneAcl> listOfAcls = new ArrayList<>();
     //User ACL
@@ -259,7 +259,8 @@
     builder.setAdminName(admin);
     builder.setOwnerName(owner);
     builder.setQuotaInBytes(quotaInBytes);
-    builder.setQuotaInCounts(quotaInCounts);
+    builder.setQuotaInNamespace(quotaInNamespace);
+    builder.setUsedNamespace(0L);
     builder.addAllMetadata(volArgs.getMetadata());
 
     //Remove duplicates and add ACLs
@@ -273,7 +274,7 @@
     } else {
       LOG.info("Creating Volume: {}, with {} as owner "
               + "and space quota set to {} bytes, counts quota set" +
-              " to {}", volumeName, owner, quotaInBytes, quotaInCounts);
+              " to {}", volumeName, owner, quotaInBytes, quotaInNamespace);
     }
     ozoneManagerClient.createVolume(builder.build());
   }
@@ -287,12 +288,12 @@
   }
 
   @Override
-  public void setVolumeQuota(String volumeName, long quotaInCounts,
+  public void setVolumeQuota(String volumeName, long quotaInNamespace,
       long quotaInBytes) throws IOException {
     HddsClientUtils.verifyResourceName(volumeName);
-    verifyCountsQuota(quotaInCounts);
+    verifyCountsQuota(quotaInNamespace);
     verifySpaceQuota(quotaInBytes);
-    ozoneManagerClient.setQuota(volumeName, quotaInCounts, quotaInBytes);
+    ozoneManagerClient.setQuota(volumeName, quotaInNamespace, quotaInBytes);
   }
 
   @Override
@@ -307,7 +308,8 @@
         volume.getAdminName(),
         volume.getOwnerName(),
         volume.getQuotaInBytes(),
-        volume.getQuotaInCounts(),
+        volume.getQuotaInNamespace(),
+        volume.getUsedNamespace(),
         volume.getCreationTime(),
         volume.getModificationTime(),
         volume.getAclMap().ozoneAclGetProtobuf().stream().
@@ -341,7 +343,8 @@
         volume.getAdminName(),
         volume.getOwnerName(),
         volume.getQuotaInBytes(),
-        volume.getQuotaInCounts(),
+        volume.getQuotaInNamespace(),
+        volume.getUsedNamespace(),
         volume.getCreationTime(),
         volume.getModificationTime(),
         volume.getAclMap().ozoneAclGetProtobuf().stream().
@@ -363,7 +366,8 @@
         volume.getAdminName(),
         volume.getOwnerName(),
         volume.getQuotaInBytes(),
-        volume.getQuotaInCounts(),
+        volume.getQuotaInNamespace(),
+        volume.getUsedNamespace(),
         volume.getCreationTime(),
         volume.getModificationTime(),
         volume.getAclMap().ozoneAclGetProtobuf().stream().
@@ -387,7 +391,7 @@
     verifyVolumeName(volumeName);
     verifyBucketName(bucketName);
     Preconditions.checkNotNull(bucketArgs);
-    verifyCountsQuota(bucketArgs.getQuotaInCounts());
+    verifyCountsQuota(bucketArgs.getQuotaInNamespace());
     verifySpaceQuota(bucketArgs.getQuotaInBytes());
 
     Boolean isVersionEnabled = bucketArgs.getVersioning() == null ?
@@ -415,7 +419,7 @@
         .setSourceVolume(bucketArgs.getSourceVolume())
         .setSourceBucket(bucketArgs.getSourceBucket())
         .setQuotaInBytes(getQuotaValue(bucketArgs.getQuotaInBytes()))
-        .setQuotaInCounts(getQuotaValue(bucketArgs.getQuotaInCounts()))
+        .setQuotaInNamespace(getQuotaValue(bucketArgs.getQuotaInNamespace()))
         .setAcls(listOfAcls.stream().distinct().collect(Collectors.toList()));
 
     if (bek != null) {
@@ -574,16 +578,16 @@
 
   @Override
   public void setBucketQuota(String volumeName, String bucketName,
-      long quotaInCounts, long quotaInBytes) throws IOException {
+      long quotaInNamespace, long quotaInBytes) throws IOException {
     HddsClientUtils.verifyResourceName(bucketName);
     HddsClientUtils.verifyResourceName(volumeName);
-    verifyCountsQuota(quotaInCounts);
+    verifyCountsQuota(quotaInNamespace);
     verifySpaceQuota(quotaInBytes);
     OmBucketArgs.Builder builder = OmBucketArgs.newBuilder();
     builder.setVolumeName(volumeName)
         .setBucketName(bucketName)
         .setQuotaInBytes(quotaInBytes)
-        .setQuotaInCounts(quotaInCounts);
+        .setQuotaInNamespace(quotaInNamespace);
     ozoneManagerClient.setBucketProperty(builder.build());
 
   }
@@ -624,8 +628,9 @@
         bucketInfo.getSourceVolume(),
         bucketInfo.getSourceBucket(),
         bucketInfo.getUsedBytes(),
+        bucketInfo.getUsedNamespace(),
         bucketInfo.getQuotaInBytes(),
-        bucketInfo.getQuotaInCounts()
+        bucketInfo.getQuotaInNamespace()
     );
   }
 
@@ -651,8 +656,9 @@
         bucket.getSourceVolume(),
         bucket.getSourceBucket(),
         bucket.getUsedBytes(),
+        bucket.getUsedNamespace(),
         bucket.getQuotaInBytes(),
-        bucket.getQuotaInCounts()))
+        bucket.getQuotaInNamespace()))
         .collect(Collectors.toList());
   }
 
diff --git a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OFSPath.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OFSPath.java
similarity index 97%
rename from hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OFSPath.java
rename to hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OFSPath.java
index f602833..ef4e09f 100644
--- a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OFSPath.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OFSPath.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.hadoop.fs.ozone;
+package org.apache.hadoop.ozone;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
@@ -66,7 +66,7 @@
   private static final String OFS_MOUNT_NAME_TMP = "tmp";
   // Hard-code the volume name to tmp for the first implementation
   @VisibleForTesting
-  static final String OFS_MOUNT_TMP_VOLUMENAME = "tmp";
+  public static final String OFS_MOUNT_TMP_VOLUMENAME = "tmp";
 
   public OFSPath(Path path) {
     initOFSPath(path.toUri());
@@ -283,7 +283,8 @@
    * @return Username MD5 hash in hex digits.
    * @throws IOException When UserGroupInformation.getCurrentUser() fails.
    */
-  static String getTempMountBucketNameOfCurrentUser() throws IOException {
+  public static String getTempMountBucketNameOfCurrentUser()
+      throws IOException {
     String username = UserGroupInformation.getCurrentUser().getUserName();
     return getTempMountBucketName(username);
   }
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/conf/OMClientConfig.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/conf/OMClientConfig.java
index 37cd67e..cbae089 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/conf/OMClientConfig.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/conf/OMClientConfig.java
@@ -36,6 +36,8 @@
 public class OMClientConfig {
 
   public static final String OM_CLIENT_RPC_TIME_OUT = "rpc.timeout";
+  public static final String OM_TRASH_EMPTIER_CORE_POOL_SIZE
+      = "trash.core.pool.size";
 
   @Config(key = OM_CLIENT_RPC_TIME_OUT,
       defaultValue = "15m",
@@ -51,6 +53,21 @@
   )
   private long rpcTimeOut = 15 * 60 * 1000;
 
+  @Config(key = OM_TRASH_EMPTIER_CORE_POOL_SIZE,
+      defaultValue = "5",
+      type = ConfigType.INT,
+      tags = {OZONE, OM, CLIENT},
+      description = "Total number of threads in pool for the Trash Emptier")
+  private int trashEmptierPoolSize = 5;
+
+
+  public int getTrashEmptierPoolSize() {
+    return trashEmptierPoolSize;
+  }
+
+  public void setTrashEmptierPoolSize(int trashEmptierPoolSize) {
+    this.trashEmptierPoolSize = trashEmptierPoolSize;
+  }
 
   public long getRpcTimeOut() {
     return rpcTimeOut;
@@ -64,4 +81,4 @@
     }
     this.rpcTimeOut = timeOut;
   }
-}
+}
\ No newline at end of file
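
The new trash emptier pool size is read through the typed config object. A minimal sketch, assuming the usual ConfigurationSource.getObject wiring used elsewhere in Ozone:

    // Sketch: load the typed OM client config and read the trash emptier pool size.
    OzoneConfiguration conf = new OzoneConfiguration();
    OMClientConfig clientConfig = conf.getObject(OMClientConfig.class);
    int poolSize = clientConfig.getTrashEmptierPoolSize();  // defaults to 5
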
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
index fba6dcc..d4bff41 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
@@ -85,7 +85,7 @@
 
   public static final String OZONE_KEY_DELETING_LIMIT_PER_TASK =
       "ozone.key.deleting.limit.per.task";
-  public static final int OZONE_KEY_DELETING_LIMIT_PER_TASK_DEFAULT = 1000;
+  public static final int OZONE_KEY_DELETING_LIMIT_PER_TASK_DEFAULT = 20000;
 
   public static final String OZONE_OM_METRICS_SAVE_INTERVAL =
       "ozone.om.save.metrics.interval";
@@ -97,7 +97,7 @@
   public static final String OZONE_OM_RATIS_ENABLE_KEY
       = "ozone.om.ratis.enable";
   public static final boolean OZONE_OM_RATIS_ENABLE_DEFAULT
-      = false;
+      = true;
   public static final String OZONE_OM_RATIS_PORT_KEY
       = "ozone.om.ratis.port";
   public static final int OZONE_OM_RATIS_PORT_DEFAULT
@@ -168,13 +168,6 @@
       OZONE_OM_RATIS_SERVER_FAILURE_TIMEOUT_DURATION_DEFAULT
       = TimeDuration.valueOf(120, TimeUnit.SECONDS);
 
-  // OM Leader server role check interval
-  public static final String OZONE_OM_RATIS_SERVER_ROLE_CHECK_INTERVAL_KEY
-      = "ozone.om.ratis.server.role.check.interval";
-  public static final TimeDuration
-      OZONE_OM_RATIS_SERVER_ROLE_CHECK_INTERVAL_DEFAULT
-      = TimeDuration.valueOf(15, TimeUnit.SECONDS);
-
   // OM SnapshotProvider configurations
   public static final String OZONE_OM_RATIS_SNAPSHOT_DIR =
       "ozone.om.ratis.snapshot.dir";
@@ -250,4 +243,5 @@
   public static final long OZONE_OM_MAX_TIME_TO_WAIT_FLUSH_TXNS =
       TimeUnit.MINUTES.toSeconds(5);
   public static final long OZONE_OM_FLUSH_TXNS_RETRY_INTERVAL_SECONDS = 5L;
+  public static final String OZONE_OM_HA_PREFIX = "ozone.om.ha";
 }
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
index 2c4d89d..30ac702 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
@@ -238,6 +238,8 @@
     UPDATE_LAYOUT_VERSION_FAILED,
     LAYOUT_FEATURE_FINALIZATION_FAILED,
     PREPARE_FAILED,
-    NOT_SUPPORTED_OPERATION_WHEN_PREPARED
+    NOT_SUPPORTED_OPERATION_WHEN_PREPARED,
+    QUOTA_ERROR
+
   }
 }
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
index 0a3f9b8..3302c34 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
@@ -50,7 +50,7 @@
   private StorageType storageType;
 
   private long quotaInBytes;
-  private long quotaInCounts;
+  private long quotaInNamespace;
 
   /**
    * Private constructor, constructed via builder.
@@ -59,18 +59,18 @@
    * @param isVersionEnabled - Bucket version flag.
    * @param storageType - Storage type to be used.
    * @param quotaInBytes Volume quota in bytes.
-   * @param quotaInCounts Volume quota in counts.
+   * @param quotaInNamespace Bucket namespace quota (maximum number of keys).
    */
   private OmBucketArgs(String volumeName, String bucketName,
       Boolean isVersionEnabled, StorageType storageType,
-      Map<String, String> metadata, long quotaInBytes, long quotaInCounts) {
+      Map<String, String> metadata, long quotaInBytes, long quotaInNamespace) {
     this.volumeName = volumeName;
     this.bucketName = bucketName;
     this.isVersionEnabled = isVersionEnabled;
     this.storageType = storageType;
     this.metadata = metadata;
     this.quotaInBytes = quotaInBytes;
-    this.quotaInCounts = quotaInCounts;
+    this.quotaInNamespace = quotaInNamespace;
   }
 
   /**
@@ -115,10 +115,10 @@
 
   /**
    * Returns Bucket Quota in key counts.
-   * @return quotaInCounts.
+   * @return quotaInNamespace.
    */
-  public long getQuotaInCounts() {
-    return quotaInCounts;
+  public long getQuotaInNamespace() {
+    return quotaInNamespace;
   }
 
   /**
@@ -154,7 +154,7 @@
     private StorageType storageType;
     private Map<String, String> metadata;
     private long quotaInBytes;
-    private long quotaInCounts;
+    private long quotaInNamespace;
 
     public Builder setVolumeName(String volume) {
       this.volumeName = volume;
@@ -186,8 +186,8 @@
       return this;
     }
 
-    public Builder setQuotaInCounts(long quota) {
-      quotaInCounts = quota;
+    public Builder setQuotaInNamespace(long quota) {
+      quotaInNamespace = quota;
       return this;
     }
 
@@ -199,7 +199,7 @@
       Preconditions.checkNotNull(volumeName);
       Preconditions.checkNotNull(bucketName);
       return new OmBucketArgs(volumeName, bucketName, isVersionEnabled,
-          storageType, metadata, quotaInBytes, quotaInCounts);
+          storageType, metadata, quotaInBytes, quotaInNamespace);
     }
   }
 
@@ -219,8 +219,8 @@
     if(quotaInBytes > 0 || quotaInBytes == OzoneConsts.QUOTA_RESET) {
       builder.setQuotaInBytes(quotaInBytes);
     }
-    if(quotaInCounts > 0 || quotaInCounts == OzoneConsts.QUOTA_RESET) {
-      builder.setQuotaInCounts(quotaInCounts);
+    if(quotaInNamespace > 0 || quotaInNamespace == OzoneConsts.QUOTA_RESET) {
+      builder.setQuotaInNamespace(quotaInNamespace);
     }
     return builder.build();
   }
@@ -239,6 +239,6 @@
             bucketArgs.getStorageType()) : null,
         KeyValueUtil.getFromProtobuf(bucketArgs.getMetadataList()),
         bucketArgs.getQuotaInBytes(),
-        bucketArgs.getQuotaInCounts());
+        bucketArgs.getQuotaInNamespace());
   }
 }
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java
index a23bbfc..ca4bdb0 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java
@@ -81,8 +81,10 @@
 
   private long usedBytes;
 
+  private long usedNamespace;
+
   private long quotaInBytes;
-  private long quotaInCounts;
+  private long quotaInNamespace;
 
   /**
    * Private constructor, constructed via builder.
@@ -99,7 +101,7 @@
    * @param sourceBucket - source bucket for bucket links, null otherwise
    * @param usedBytes - Bucket Quota Usage in bytes.
    * @param quotaInBytes Bucket quota in bytes.
-   * @param quotaInCounts Bucket quota in counts.
+   * @param quotaInNamespace Bucket namespace quota (maximum number of keys).
    */
   @SuppressWarnings("checkstyle:ParameterNumber")
   private OmBucketInfo(String volumeName,
@@ -116,8 +118,9 @@
       String sourceVolume,
       String sourceBucket,
       long usedBytes,
+      long usedNamespace,
       long quotaInBytes,
-      long quotaInCounts) {
+      long quotaInNamespace) {
     this.volumeName = volumeName;
     this.bucketName = bucketName;
     this.acls = acls;
@@ -132,8 +135,9 @@
     this.sourceVolume = sourceVolume;
     this.sourceBucket = sourceBucket;
     this.usedBytes = usedBytes;
+    this.usedNamespace = usedNamespace;
     this.quotaInBytes = quotaInBytes;
-    this.quotaInCounts = quotaInCounts;
+    this.quotaInNamespace = quotaInNamespace;
   }
 
   /**
@@ -244,16 +248,24 @@
     return usedBytes;
   }
 
+  public long getUsedNamespace() {
+    return usedNamespace;
+  }
+
   public void incrUsedBytes(long bytes) {
     this.usedBytes += bytes;
   }
 
+  public void incrUsedNamespace(long namespaceToUse) {
+    this.usedNamespace += namespaceToUse;
+  }
+
   public long getQuotaInBytes() {
     return quotaInBytes;
   }
 
-  public long getQuotaInCounts() {
-    return quotaInCounts;
+  public long getQuotaInNamespace() {
+    return quotaInNamespace;
   }
 
   public boolean isLink() {
@@ -292,6 +304,8 @@
       auditMap.put(OzoneConsts.SOURCE_BUCKET, sourceBucket);
     }
     auditMap.put(OzoneConsts.USED_BYTES, String.valueOf(this.usedBytes));
+    auditMap.put(OzoneConsts.USED_NAMESPACE,
+        String.valueOf(this.usedNamespace));
     return auditMap;
   }
 
@@ -329,8 +343,9 @@
         .setAcls(acls)
         .addAllMetadata(metadata)
         .setUsedBytes(usedBytes)
+        .setUsedNamespace(usedNamespace)
         .setQuotaInBytes(quotaInBytes)
-        .setQuotaInCounts(quotaInCounts);
+        .setQuotaInNamespace(quotaInNamespace);
   }
 
   /**
@@ -351,8 +366,9 @@
     private String sourceVolume;
     private String sourceBucket;
     private long usedBytes;
+    private long usedNamespace;
     private long quotaInBytes;
-    private long quotaInCounts;
+    private long quotaInNamespace;
 
     public Builder() {
       //Default values
@@ -361,7 +377,7 @@
       this.storageType = StorageType.DISK;
       this.metadata = new HashMap<>();
       this.quotaInBytes = OzoneConsts.QUOTA_RESET;
-      this.quotaInCounts = OzoneConsts.QUOTA_RESET;
+      this.quotaInNamespace = OzoneConsts.QUOTA_RESET;
     }
 
     public Builder setVolumeName(String volume) {
@@ -451,13 +467,18 @@
       return this;
     }
 
+    public Builder setUsedNamespace(long quotaUsage) {
+      this.usedNamespace = quotaUsage;
+      return this;
+    }
+
     public Builder setQuotaInBytes(long quota) {
       this.quotaInBytes = quota;
       return this;
     }
 
-    public Builder setQuotaInCounts(long quota) {
-      this.quotaInCounts = quota;
+    public Builder setQuotaInNamespace(long quota) {
+      this.quotaInNamespace = quota;
       return this;
     }
 
@@ -475,7 +496,7 @@
       return new OmBucketInfo(volumeName, bucketName, acls, isVersionEnabled,
           storageType, creationTime, modificationTime, objectID, updateID,
           metadata, bekInfo, sourceVolume, sourceBucket, usedBytes,
-              quotaInBytes, quotaInCounts);
+          usedNamespace, quotaInBytes, quotaInNamespace);
     }
   }
 
@@ -494,9 +515,10 @@
         .setObjectID(objectID)
         .setUpdateID(updateID)
         .setUsedBytes(usedBytes)
+        .setUsedNamespace(usedNamespace)
         .addAllMetadata(KeyValueUtil.toProtobuf(metadata))
         .setQuotaInBytes(quotaInBytes)
-        .setQuotaInCounts(quotaInCounts);
+        .setQuotaInNamespace(quotaInNamespace);
     if (bekInfo != null && bekInfo.getKeyName() != null) {
       bib.setBeinfo(OMPBHelper.convert(bekInfo));
     }
@@ -526,7 +548,8 @@
         .setUsedBytes(bucketInfo.getUsedBytes())
         .setModificationTime(bucketInfo.getModificationTime())
         .setQuotaInBytes(bucketInfo.getQuotaInBytes())
-        .setQuotaInCounts(bucketInfo.getQuotaInCounts());
+        .setUsedNamespace(bucketInfo.getUsedNamespace())
+        .setQuotaInNamespace(bucketInfo.getQuotaInNamespace());
     if (bucketInfo.hasObjectID()) {
       obib.setObjectID(bucketInfo.getObjectID());
     }
@@ -562,8 +585,9 @@
         ", storageType='" + storageType + "'" +
         ", creationTime='" + creationTime + "'" +
         ", usedBytes='" + usedBytes + "'" +
+        ", usedNamespace='" + usedNamespace + "'" +
         ", quotaInBytes='" + quotaInBytes + "'" +
-        ", quotaInCounts='" + quotaInCounts + '\'' +
+        ", quotaInNamespace='" + quotaInNamespace + '\'' +
         sourceInfo +
         '}';
   }
@@ -587,6 +611,7 @@
         objectID == that.objectID &&
         updateID == that.updateID &&
         usedBytes == that.usedBytes &&
+        usedNamespace == that.usedNamespace &&
         Objects.equals(sourceVolume, that.sourceVolume) &&
         Objects.equals(sourceBucket, that.sourceBucket) &&
         Objects.equals(metadata, that.metadata) &&
@@ -614,8 +639,9 @@
         ", updateID=" + updateID +
         ", metadata=" + metadata +
         ", usedBytes=" + usedBytes +
+        ", usedNamespace=" + usedNamespace +
         ", quotaInBytes=" + quotaInBytes +
-        ", quotaInCounts=" + quotaInCounts +
+        ", quotaInNamespace=" + quotaInNamespace +
         '}';
   }
 }
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
index a990d1a..d9fe23a 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
@@ -18,33 +18,30 @@
 
 package org.apache.hadoop.ozone.om.helpers;
 
-import com.google.protobuf.ByteString;
-import org.apache.hadoop.ozone.OzoneAcl;
-import org.apache.hadoop.ozone.om.exceptions.OMException;
-import org.apache.hadoop.ozone.protocol.proto
-    .OzoneManagerProtocolProtos.OzoneAclInfo;
-import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo.OzoneAclScope;
-import org.apache.hadoop.ozone.protocol.proto
-    .OzoneManagerProtocolProtos.OzoneAclInfo.OzoneAclType;
-import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType;
-import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;
-import org.apache.hadoop.security.UserGroupInformation;
-
-import java.util.BitSet;
-import java.util.Collection;
-import java.util.List;
-import java.util.LinkedList;
-import java.util.Map;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.Objects;
-import java.util.stream.Collectors;
-
 import static org.apache.hadoop.ozone.OzoneAcl.ZERO_BITSET;
 import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.INVALID_REQUEST;
 import static org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.ALL;
 import static org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.NONE;
 
+import com.google.protobuf.ByteString;
+import java.util.ArrayList;
+import java.util.BitSet;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.stream.Collectors;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo.OzoneAclScope;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo.OzoneAclType;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;
+import org.apache.hadoop.security.UserGroupInformation;
+
 /**
  * This helper class keeps a map of all user and their permissions.
  */
@@ -100,30 +97,60 @@
   // Add a new acl to the map
   public void addAcl(OzoneAcl acl) throws OMException {
     Objects.requireNonNull(acl, "Acl should not be null.");
+    OzoneAclType aclType = OzoneAclType.valueOf(acl.getType().name());
     if (acl.getAclScope().equals(OzoneAcl.AclScope.DEFAULT)) {
-      defaultAclList.add(OzoneAcl.toProtobuf(acl));
+      addDefaultAcl(acl);
       return;
     }
 
-    OzoneAclType aclType = OzoneAclType.valueOf(acl.getType().name());
     if (!getAccessAclMap(aclType).containsKey(acl.getName())) {
       getAccessAclMap(aclType).put(acl.getName(), acl.getAclBitSet());
     } else {
-      // Check if we are adding new rights to existing acl.
-      BitSet temp = (BitSet) acl.getAclBitSet().clone();
-      BitSet curRights = (BitSet) getAccessAclMap(aclType).
-          get(acl.getName()).clone();
-      temp.or(curRights);
-
-      if (temp.equals(curRights)) {
-        // throw exception if acl is already added.
-        throw new OMException("Acl " + acl + " already exist.",
-            INVALID_REQUEST);
-      }
-      getAccessAclMap(aclType).replace(acl.getName(), temp);
+      BitSet curBitSet = getAccessAclMap(aclType).get(acl.getName());
+      BitSet bitSet = checkAndGet(acl, curBitSet);
+      getAccessAclMap(aclType).replace(acl.getName(), bitSet);
     }
   }
 
+  private void addDefaultAcl(OzoneAcl acl) throws OMException {
+    OzoneAclInfo ozoneAclInfo = OzoneAcl.toProtobuf(acl);
+    if (defaultAclList.contains(ozoneAclInfo)) {
+      aclExistsError(acl);
+    } else {
+      for (int i = 0; i < defaultAclList.size(); i++) {
+        OzoneAclInfo old = defaultAclList.get(i);
+        if (old.getType() == ozoneAclInfo.getType() && old.getName().equals(
+                ozoneAclInfo.getName())) {
+          BitSet curBitSet = BitSet.valueOf(old.getRights().toByteArray());
+          BitSet bitSet = checkAndGet(acl, curBitSet);
+          ozoneAclInfo = OzoneAclInfo.newBuilder(ozoneAclInfo).setRights(
+                  ByteString.copyFrom(bitSet.toByteArray())).build();
+          defaultAclList.remove(i);
+          defaultAclList.add(ozoneAclInfo);
+          return;
+        }
+      }
+    }
+    defaultAclList.add(ozoneAclInfo);
+  }
+
+  private void aclExistsError(OzoneAcl acl) throws OMException {
+    // throw exception if acl is already added.
+    throw new OMException("Acl " + acl + " already exist.", INVALID_REQUEST);
+  }
+
+  private BitSet checkAndGet(OzoneAcl acl, BitSet curBitSet)
+          throws OMException {
+    // Check if we are adding new rights to existing acl.
+    BitSet temp = (BitSet) acl.getAclBitSet().clone();
+    BitSet curRights = (BitSet) curBitSet.clone();
+    temp.or(curRights);
+    if (temp.equals(curRights)) {
+      aclExistsError(acl);
+    }
+    return temp;
+  }
+
   // Add a new acl to the map
   public void setAcls(List<OzoneAcl> acls) throws OMException {
     Objects.requireNonNull(acls, "Acls should not be null.");
@@ -175,7 +202,7 @@
   public void addAcl(OzoneAclInfo acl) throws OMException {
     Objects.requireNonNull(acl, "Acl should not be null.");
     if (acl.getAclScope().equals(OzoneAclInfo.OzoneAclScope.DEFAULT)) {
-      defaultAclList.add(acl);
+      addDefaultAcl(OzoneAcl.fromProtobuf(acl));
       return;
     }
 
@@ -183,9 +210,7 @@
       BitSet acls = BitSet.valueOf(acl.getRights().toByteArray());
       getAccessAclMap(acl.getType()).put(acl.getName(), acls);
     } else {
-      // throw exception if acl is already added.
-
-      throw new OMException("Acl " + acl + " already exist.", INVALID_REQUEST);
+      aclExistsError(OzoneAcl.fromProtobuf(acl));
     }
   }
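
addDefaultAcl now merges new rights into an existing default ACL entry with the same type and name, and only throws when the requested rights are already present. A behavioural sketch (the OzoneAcl constructor signature used here is an assumption):

    OmOzoneAclMap aclMap = new OmOzoneAclMap();
    aclMap.addAcl(new OzoneAcl(ACLIdentityType.USER, "alice",
        ACLType.READ, OzoneAcl.AclScope.DEFAULT));
    // Merges WRITE into the existing default entry for user "alice".
    aclMap.addAcl(new OzoneAcl(ACLIdentityType.USER, "alice",
        ACLType.WRITE, OzoneAcl.AclScope.DEFAULT));
    // Re-adding the same rights now fails with OMException (INVALID_REQUEST).
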
 
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
index 13c67c8..559eebc 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
@@ -44,7 +44,8 @@
   private long creationTime;
   private long modificationTime;
   private long quotaInBytes;
-  private long quotaInCounts;
+  private long quotaInNamespace;
+  private long usedNamespace;
   private final OmOzoneAclMap aclMap;
 
   /**
@@ -53,7 +54,8 @@
    * @param ownerName  - Volume owner's name
    * @param volume - volume name
    * @param quotaInBytes - Volume Quota in bytes.
-   * @param quotaInCounts - Volume Quota in counts.
+   * @param quotaInNamespace - Volume namespace quota (maximum number of buckets).
+   * @param usedNamespace - Used namespace of the volume (number of buckets).
    * @param metadata - metadata map for custom key/value data.
    * @param aclMap - User to access rights map.
    * @param creationTime - Volume creation time.
@@ -64,14 +66,15 @@
   @SuppressWarnings({"checkstyle:ParameterNumber", "This is invoked from a " +
       "builder."})
   private OmVolumeArgs(String adminName, String ownerName, String volume,
-      long quotaInBytes, long quotaInCounts, Map<String, String> metadata,
-      OmOzoneAclMap aclMap, long creationTime, long modificationTime,
-      long objectID, long updateID) {
+      long quotaInBytes, long quotaInNamespace, long usedNamespace,
+      Map<String, String> metadata, OmOzoneAclMap aclMap, long creationTime,
+      long modificationTime, long objectID, long updateID) {
     this.adminName = adminName;
     this.ownerName = ownerName;
     this.volume = volume;
     this.quotaInBytes = quotaInBytes;
-    this.quotaInCounts = quotaInCounts;
+    this.quotaInNamespace = quotaInNamespace;
+    this.usedNamespace = usedNamespace;
     this.metadata = metadata;
     this.aclMap = aclMap;
     this.creationTime = creationTime;
@@ -89,8 +92,8 @@
     this.quotaInBytes = quotaInBytes;
   }
 
-  public void setQuotaInCounts(long quotaInCounts) {
-    this.quotaInCounts= quotaInCounts;
+  public void setQuotaInNamespace(long quotaInNamespace) {
+    this.quotaInNamespace = quotaInNamespace;
   }
 
   public void setCreationTime(long time) {
@@ -165,8 +168,8 @@
    * Returns Quota in counts.
    * @return long, Quota in counts.
    */
-  public long getQuotaInCounts() {
-    return quotaInCounts;
+  public long getQuotaInNamespace() {
+    return quotaInNamespace;
   }
 
   public OmOzoneAclMap getAclMap() {
@@ -174,6 +177,21 @@
   }
 
   /**
+   * Increase the used bucket namespace by n.
+   */
+  public void incrUsedNamespace(long n) {
+    usedNamespace += n;
+  }
+
+  /**
+   * Returns used bucket namespace.
+   * @return usedNamespace
+   */
+  public long getUsedNamespace() {
+    return usedNamespace;
+  }
+
+  /**
    * Returns new builder class that builds a OmVolumeArgs.
    *
    * @return Builder
@@ -192,8 +210,10 @@
     auditMap.put(OzoneConsts.MODIFICATION_TIME,
         String.valueOf(this.modificationTime));
     auditMap.put(OzoneConsts.QUOTA_IN_BYTES, String.valueOf(this.quotaInBytes));
-    auditMap.put(OzoneConsts.QUOTA_IN_COUNTS,
-        String.valueOf(this.quotaInCounts));
+    auditMap.put(OzoneConsts.QUOTA_IN_NAMESPACE,
+        String.valueOf(this.quotaInNamespace));
+    auditMap.put(OzoneConsts.USED_NAMESPACE,
+        String.valueOf(this.usedNamespace));
     auditMap.put(OzoneConsts.OBJECT_ID, String.valueOf(this.getObjectID()));
     auditMap.put(OzoneConsts.UPDATE_ID, String.valueOf(this.getUpdateID()));
     return auditMap;
@@ -226,7 +246,8 @@
     private long creationTime;
     private long modificationTime;
     private long quotaInBytes;
-    private long quotaInCounts;
+    private long quotaInNamespace;
+    private long usedNamespace;
     private Map<String, String> metadata;
     private OmOzoneAclMap aclMap;
     private long objectID;
@@ -259,6 +280,8 @@
     public Builder() {
       metadata = new HashMap<>();
       aclMap = new OmOzoneAclMap();
+      quotaInBytes = OzoneConsts.QUOTA_RESET;
+      quotaInNamespace = OzoneConsts.QUOTA_RESET;
     }
 
     public Builder setAdminName(String admin) {
@@ -291,8 +314,13 @@
       return this;
     }
 
-    public Builder setQuotaInCounts(long quotaCounts) {
-      this.quotaInCounts = quotaCounts;
+    public Builder setQuotaInNamespace(long quotaNamespace) {
+      this.quotaInNamespace = quotaNamespace;
+      return this;
+    }
+
+    public Builder setUsedNamespace(long namespaceUsage) {
+      this.usedNamespace = namespaceUsage;
       return this;
     }
 
@@ -322,8 +350,8 @@
       Preconditions.checkNotNull(ownerName);
       Preconditions.checkNotNull(volume);
       return new OmVolumeArgs(adminName, ownerName, volume, quotaInBytes,
-          quotaInCounts, metadata, aclMap, creationTime, modificationTime,
-          objectID, updateID);
+          quotaInNamespace, usedNamespace, metadata, aclMap, creationTime,
+          modificationTime, objectID, updateID);
     }
 
   }
@@ -335,7 +363,8 @@
         .setOwnerName(ownerName)
         .setVolume(volume)
         .setQuotaInBytes(quotaInBytes)
-        .setQuotaInCounts(quotaInCounts)
+        .setQuotaInNamespace(quotaInNamespace)
+        .setUsedNamespace(usedNamespace)
         .addAllMetadata(KeyValueUtil.toProtobuf(metadata))
         .addAllVolumeAcls(aclList)
         .setCreationTime(
@@ -355,7 +384,8 @@
         volInfo.getOwnerName(),
         volInfo.getVolume(),
         volInfo.getQuotaInBytes(),
-        volInfo.getQuotaInCounts(),
+        volInfo.getQuotaInNamespace(),
+        volInfo.getUsedNamespace(),
         KeyValueUtil.getFromProtobuf(volInfo.getMetadataList()),
         aclMap,
         volInfo.getCreationTime(),
@@ -372,6 +402,7 @@
         ", owner='" + ownerName + '\'' +
         ", creationTime='" + creationTime + '\'' +
         ", quotaInBytes='" + quotaInBytes + '\'' +
+        ", usedNamespace='" + usedNamespace + '\'' +
         '}';
   }
 
@@ -387,7 +418,7 @@
     OmOzoneAclMap cloneAclMap = aclMap.copyObject();
 
     return new OmVolumeArgs(adminName, ownerName, volume, quotaInBytes,
-        quotaInCounts, cloneMetadata, cloneAclMap, creationTime,
-        modificationTime, objectID, updateID);
+        quotaInNamespace, usedNamespace, cloneMetadata, cloneAclMap,
+        creationTime, modificationTime, objectID, updateID);
   }
 }
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
index 374a567..706d126 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
@@ -93,11 +93,11 @@
   /**
    * Changes the Quota on a volume.
    * @param volume - Name of the volume.
-   * @param quotaInCounts - Volume quota in counts.
+   * @param quotaInNamespace - Volume quota in namespace (maximum number of buckets).
    * @param quotaInBytes - Volume quota in bytes.
    * @throws IOException
    */
-  void setQuota(String volume, long quotaInCounts, long quotaInBytes)
+  void setQuota(String volume, long quotaInNamespace, long quotaInBytes)
       throws IOException;
 
   /**
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
index a26c436..61dbccc 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
@@ -271,18 +271,18 @@
    * Changes the Quota on a volume.
    *
    * @param volume - Name of the volume.
-   * @param quotaInCounts - Volume quota in counts.
+   * @param quotaInNamespace - Volume quota in namespace (maximum number of buckets).
    * @param quotaInBytes - Volume quota in bytes.
    * @throws IOException
    */
   @Override
-  public void setQuota(String volume, long quotaInCounts,
+  public void setQuota(String volume, long quotaInNamespace,
       long quotaInBytes) throws IOException {
     SetVolumePropertyRequest.Builder req =
         SetVolumePropertyRequest.newBuilder();
     req.setVolumeName(volume)
         .setQuotaInBytes(quotaInBytes)
-        .setQuotaInCounts(quotaInCounts);
+        .setQuotaInNamespace(quotaInNamespace);
 
     OMRequest omRequest = createOMRequest(Type.SetVolumeProperty)
         .setSetVolumePropertyRequest(req)
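
For reference, a minimal sketch (not part of the patch) of calling the renamed quota API through the OzoneManagerProtocol interface changed above; the `om` proxy is assumed to be an already-initialized client:

```java
// Sketch: set a 100-bucket namespace quota and a 10 GB space quota on "vol1".
// Assumes the usual imports (OzoneManagerProtocol, java.io.IOException).
void updateQuota(OzoneManagerProtocol om) throws IOException {
  long quotaInNamespace = 100L;                 // maximum number of buckets
  long quotaInBytes = 10L * 1024 * 1024 * 1024; // 10 GB
  om.setQuota("vol1", quotaInNamespace, quotaInBytes);
}
```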
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
index 1cdea8b..9bd8398 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
@@ -21,6 +21,7 @@
 import java.net.InetAddress;
 import java.net.UnknownHostException;
 import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 import java.text.ParseException;
 import java.text.SimpleDateFormat;
 import java.util.Locale;
@@ -42,8 +43,7 @@
 @InterfaceAudience.Private
 public final class OzoneUtils {
 
-  public static final String ENCODING_NAME = "UTF-8";
-  public static final Charset ENCODING = Charset.forName(ENCODING_NAME);
+  public static final Charset ENCODING = StandardCharsets.UTF_8;
 
   private OzoneUtils() {
     // Never constructed
diff --git a/hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/TestOmOzoneAclMap.java b/hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/TestOmOzoneAclMap.java
new file mode 100644
index 0000000..6d8f685
--- /dev/null
+++ b/hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/helpers/TestOmOzoneAclMap.java
@@ -0,0 +1,56 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.helpers;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Class to test {@link OmOzoneAclMap}.
+ */
+public class TestOmOzoneAclMap {
+
+  @Test
+  public void testAddAcl() throws Exception {
+    OmOzoneAclMap map = new OmOzoneAclMap();
+    map.addAcl(OzoneAcl.parseAcl("user:masstter:rx[DEFAULT]"));
+    map.addAcl(OzoneAcl.parseAcl("user:masstter:rw[DEFAULT]"));
+
+    //[user:masstter:rwx[DEFAULT]]
+    Assert.assertEquals(1, map.getAcl().size());
+    Assert.assertEquals(1, map.getDefaultAclList().size());
+
+    map = new OmOzoneAclMap();
+    map.addAcl(OzoneAcl.parseAcl("user:masstter:rx"));
+    map.addAcl(OzoneAcl.parseAcl("user:masstter:rw[ACCESS]"));
+
+    //[user:masstter:rwx[ACCESS]]
+    Assert.assertEquals(1, map.getAcl().size());
+    Assert.assertEquals(0, map.getDefaultAclList().size());
+
+    map = new OmOzoneAclMap();
+    map.addAcl(OzoneAcl.parseAcl("user:masstter:rwx[DEFAULT]"));
+    map.addAcl(OzoneAcl.parseAcl("user:masstter:rwx[ACCESS]"));
+
+    //[user:masstter:rwx[ACCESS], user:masstter:rwx[DEFAULT]]
+    Assert.assertEquals(2, map.getAcl().size());
+    Assert.assertEquals(1, map.getDefaultAclList().size());
+
+  }
+}
diff --git a/hadoop-ozone/csi/src/main/java/org/apache/hadoop/ozone/csi/NodeService.java b/hadoop-ozone/csi/src/main/java/org/apache/hadoop/ozone/csi/NodeService.java
index 45784a4..0665a79 100644
--- a/hadoop-ozone/csi/src/main/java/org/apache/hadoop/ozone/csi/NodeService.java
+++ b/hadoop-ozone/csi/src/main/java/org/apache/hadoop/ozone/csi/NodeService.java
@@ -20,6 +20,7 @@
 import java.io.IOException;
 import java.net.InetAddress;
 import java.net.UnknownHostException;
+import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.Paths;
 import java.util.concurrent.TimeUnit;
@@ -86,8 +87,8 @@
     exec.waitFor(10, TimeUnit.SECONDS);
 
     LOG.info("Command is executed with  stdout: {}, stderr: {}",
-        IOUtils.toString(exec.getInputStream(), "UTF-8"),
-        IOUtils.toString(exec.getErrorStream(), "UTF-8"));
+        IOUtils.toString(exec.getInputStream(), StandardCharsets.UTF_8),
+        IOUtils.toString(exec.getErrorStream(), StandardCharsets.UTF_8));
     if (exec.exitValue() != 0) {
       throw new RuntimeException(String
           .format("Return code of the command %s was %d", command,
diff --git a/hadoop-ozone/dev-support/checks/acceptance.sh b/hadoop-ozone/dev-support/checks/acceptance.sh
index 99d8d52..a96aff7 100755
--- a/hadoop-ozone/dev-support/checks/acceptance.sh
+++ b/hadoop-ozone/dev-support/checks/acceptance.sh
@@ -18,7 +18,7 @@
 
 REPORT_DIR=${OUTPUT_DIR:-"$DIR/../../../target/acceptance"}
 
-OZONE_VERSION=$(grep "<ozone.version>" "pom.xml" | sed 's/<[^>]*>//g'|  sed 's/^[ \t]*//')
+OZONE_VERSION=$(mvn help:evaluate -Dexpression=ozone.version -q -DforceStdout)
 DIST_DIR="$DIR/../../dist/target/ozone-$OZONE_VERSION"
 
 if [ ! -d "$DIST_DIR" ]; then
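
The version lookup above (and the same change in blockade.sh and kubernetes.sh below) switches from grep/sed over pom.xml to `mvn help:evaluate`, which with `-q -DforceStdout` prints only the resolved property value. A quick way to verify the new command locally (sketch, run from the repository root):

```bash
# Prints just the resolved ozone.version property, no XML post-processing needed.
OZONE_VERSION=$(mvn help:evaluate -Dexpression=ozone.version -q -DforceStdout)
echo "dist dir: hadoop-ozone/dist/target/ozone-${OZONE_VERSION}"
```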
diff --git a/hadoop-ozone/dev-support/checks/blockade.sh b/hadoop-ozone/dev-support/checks/blockade.sh
index a48d2b5..3ba41bd 100755
--- a/hadoop-ozone/dev-support/checks/blockade.sh
+++ b/hadoop-ozone/dev-support/checks/blockade.sh
@@ -17,7 +17,7 @@
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
 cd "$DIR/../../.." || exit 1
 
-OZONE_VERSION=$(grep "<ozone.version>" "$DIR/../../pom.xml" | sed 's/<[^>]*>//g'|  sed 's/^[ \t]*//')
+OZONE_VERSION=$(mvn help:evaluate -Dexpression=ozone.version -q -DforceStdout)
 cd "$DIR/../../dist/target/ozone-$OZONE_VERSION/tests" || exit 1
 
 source ${DIR}/../../dist/target/ozone-${OZONE_VERSION}/compose/ozoneblockade/.env
diff --git a/hadoop-ozone/dev-support/checks/kubernetes.sh b/hadoop-ozone/dev-support/checks/kubernetes.sh
index 7f68da1..ea66313 100755
--- a/hadoop-ozone/dev-support/checks/kubernetes.sh
+++ b/hadoop-ozone/dev-support/checks/kubernetes.sh
@@ -18,7 +18,7 @@
 
 REPORT_DIR=${OUTPUT_DIR:-"$DIR/../../../target/kubernetes"}
 
-OZONE_VERSION=$(grep "<ozone.version>" "pom.xml" | sed 's/<[^>]*>//g'|  sed 's/^[ \t]*//')
+OZONE_VERSION=$(mvn help:evaluate -Dexpression=ozone.version -q -DforceStdout)
 DIST_DIR="$DIR/../../dist/target/ozone-$OZONE_VERSION"
 
 if [ ! -d "$DIST_DIR" ]; then
diff --git a/hadoop-ozone/dev-support/intellij/ozone-site.xml b/hadoop-ozone/dev-support/intellij/ozone-site.xml
index 3fde850..e691d91 100644
--- a/hadoop-ozone/dev-support/intellij/ozone-site.xml
+++ b/hadoop-ozone/dev-support/intellij/ozone-site.xml
@@ -67,4 +67,8 @@
     <name>ozone.recon.db.dir</name>
     <value>/tmp/recon</value>
   </property>
+  <property>
+    <name>datanode.replication.port</name>
+    <value>0</value>
+  </property>
 </configuration>
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-ha/docker-config b/hadoop-ozone/dist/src/main/compose/ozone-ha/docker-config
index 95f840d..64f07e1 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-ha/docker-config
+++ b/hadoop-ozone/dist/src/main/compose/ozone-ha/docker-config
@@ -31,5 +31,6 @@
 OZONE-SITE.XML_ozone.scm.client.address=scm
 OZONE-SITE.XML_ozone.client.failover.max.attempts=6
 OZONE-SITE.XML_hdds.datanode.dir=/data/hdds
+OZONE-SITE.XML_ozone.datanode.pipeline.limit=1
 
 no_proxy=om1,om2,om3,scm,s3g,recon,kdc,localhost,127.0.0.1
diff --git a/hadoop-ozone/dist/src/main/compose/ozone-mr/test.sh b/hadoop-ozone/dist/src/main/compose/ozone-mr/test.sh
index 3a18d4d..c520348 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone-mr/test.sh
+++ b/hadoop-ozone/dist/src/main/compose/ozone-mr/test.sh
@@ -35,3 +35,4 @@
   copy_results "${d}" "${ALL_RESULT_DIR}"
 done
 
+exit ${RESULT}
diff --git a/hadoop-ozone/dist/src/main/compose/ozone/README.md b/hadoop-ozone/dist/src/main/compose/ozone/README.md
index c28f832..7ffce11 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone/README.md
+++ b/hadoop-ozone/dist/src/main/compose/ozone/README.md
@@ -18,7 +18,7 @@
 
 There are two optional add-ons:
 
- * monitoring: adds Grafana, Jaeger and Prometheus sercvies, and configures Ozone to work with them
+ * monitoring: adds Grafana, Jaeger and Prometheus services, and configures Ozone to work with them
  * profiling: allows sampling Ozone CPU/memory using [async-profiler](https://github.com/jvm-profiling-tools/async-profiler)
 
 ## How to start
diff --git a/hadoop-ozone/dist/src/main/compose/ozone/docker-config b/hadoop-ozone/dist/src/main/compose/ozone/docker-config
index b047195..d4767ed 100644
--- a/hadoop-ozone/dist/src/main/compose/ozone/docker-config
+++ b/hadoop-ozone/dist/src/main/compose/ozone/docker-config
@@ -15,6 +15,7 @@
 # limitations under the License.
 
 CORE-SITE.XML_fs.defaultFS=ofs://om
+CORE-SITE.XML_fs.trash.interval=1
 
 OZONE-SITE.XML_ozone.om.address=om
 OZONE-SITE.XML_ozone.om.http-address=om:9874
@@ -30,5 +31,6 @@
 OZONE-SITE.XML_hdds.datanode.dir=/data/hdds
 OZONE-SITE.XML_ozone.recon.address=recon:9891
 OZONE-SITE.XML_ozone.recon.om.snapshot.task.interval.delay=1m
+OZONE-SITE.XML_ozone.datanode.pipeline.limit=1
 
 no_proxy=om,scm,s3g,recon,kdc,localhost,127.0.0.1
diff --git a/hadoop-ozone/dist/src/main/compose/ozonescripts/start.sh b/hadoop-ozone/dist/src/main/compose/ozonescripts/start.sh
index 49fc506..2ce768d 100755
--- a/hadoop-ozone/dist/src/main/compose/ozonescripts/start.sh
+++ b/hadoop-ozone/dist/src/main/compose/ozonescripts/start.sh
@@ -17,10 +17,10 @@
 set -x
 docker-compose ps | grep datanode | awk '{print $1}' | xargs -n1  docker inspect --format '{{ .Config.Hostname }}' > ../../etc/hadoop/workers
 docker-compose ps | grep ozonescripts | awk '{print $1}' | xargs -I CONTAINER -n1 docker exec CONTAINER cp /opt/hadoop/etc/hadoop/workers /etc/hadoop/workers
-docker-compose exec scm /opt/hadoop/bin/ozone scm --init
-docker-compose exec scm /opt/hadoop/sbin/start-ozone.sh
+docker-compose exec -T scm /opt/hadoop/bin/ozone scm --init
+docker-compose exec -T scm /opt/hadoop/sbin/start-ozone.sh
 #We need a running SCM for om objectstore creation
 #TODO create a utility to wait for the startup
 sleep 10
-docker-compose exec om /opt/hadoop/bin/ozone om --init
-docker-compose exec scm /opt/hadoop/sbin/start-ozone.sh
+docker-compose exec -T om /opt/hadoop/bin/ozone om --init
+docker-compose exec -T scm /opt/hadoop/sbin/start-ozone.sh
diff --git a/hadoop-ozone/dist/src/main/compose/ozonescripts/stop.sh b/hadoop-ozone/dist/src/main/compose/ozonescripts/stop.sh
index a3ce08a..012fffb 100755
--- a/hadoop-ozone/dist/src/main/compose/ozonescripts/stop.sh
+++ b/hadoop-ozone/dist/src/main/compose/ozonescripts/stop.sh
@@ -14,4 +14,4 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-docker-compose exec scm /opt/hadoop/sbin/stop-ozone.sh
+docker-compose exec -T scm /opt/hadoop/sbin/stop-ozone.sh
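
The `-T` flag added to these `docker-compose exec` calls disables pseudo-TTY allocation, so the scripts also work where stdin is not a terminal (for example in CI); without it, docker-compose may refuse to run the command. A small sketch of the distinction:

```bash
# Works both interactively and from CI: -T skips pseudo-TTY allocation.
docker-compose exec -T scm /opt/hadoop/bin/ozone scm --init
# Interactive debugging can still allocate a TTY by omitting -T:
docker-compose exec scm bash
```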
diff --git a/hadoop-ozone/dist/src/main/compose/ozonescripts/test.sh b/hadoop-ozone/dist/src/main/compose/ozonescripts/test.sh
new file mode 100755
index 0000000..6b4ef15
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/ozonescripts/test.sh
@@ -0,0 +1,46 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#suite:misc
+
+COMPOSE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+export COMPOSE_DIR
+
+export SECURITY_ENABLED=false
+export OZONE_REPLICATION_FACTOR=1
+
+# shellcheck source=/dev/null
+source "$COMPOSE_DIR/../testlib.sh"
+
+wait_for_safemode_exit() {
+  wait_for_port scm 22 30
+  wait_for_port om 22 30
+  wait_for_port datanode 22 30
+}
+
+start_docker_env 1
+
+${COMPOSE_DIR}/start.sh
+${COMPOSE_DIR}/ps.sh
+
+execute_robot_test scm admincli/pipeline.robot
+
+${COMPOSE_DIR}/stop.sh
+
+stop_docker_env
+
+generate_report
diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
index dd689af..110731f 100644
--- a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
+++ b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
@@ -15,6 +15,7 @@
 # limitations under the License.
 
 CORE-SITE.XML_fs.defaultFS=ofs://om
+CORE-SITE.XML_fs.trash.interval=1
 
 OZONE-SITE.XML_ozone.om.address=om
 OZONE-SITE.XML_ozone.om.http-address=om:9874
@@ -29,6 +30,7 @@
 OZONE-SITE.XML_ozone.scm.client.address=scm
 OZONE-SITE.XML_hdds.block.token.enabled=true
 OZONE-SITE.XML_ozone.replication=3
+OZONE-SITE.XML_ozone.datanode.pipeline.limit=1
 
 OZONE-SITE.XML_ozone.recon.om.snapshot.task.interval.delay=1m
 OZONE-SITE.XML_ozone.recon.db.dir=/data/metadata/recon
diff --git a/hadoop-ozone/dist/src/main/compose/testlib.sh b/hadoop-ozone/dist/src/main/compose/testlib.sh
index b122479..981536a 100755
--- a/hadoop-ozone/dist/src/main/compose/testlib.sh
+++ b/hadoop-ozone/dist/src/main/compose/testlib.sh
@@ -22,7 +22,7 @@
 SMOKETEST_DIR_INSIDE="${OZONE_DIR:-/opt/hadoop}/smoketest"
 
 OM_HA_PARAM=""
-if [[ -n "${OM_SERVICE_ID}" ]]; then
+if [[ -n "${OM_SERVICE_ID}" ]] && [[ "${OM_SERVICE_ID}" != "om" ]]; then
   OM_HA_PARAM="--om-service-id=${OM_SERVICE_ID}"
 else
   OM_SERVICE_ID=om
diff --git a/hadoop-ozone/dist/src/main/compose/upgrade/test.sh b/hadoop-ozone/dist/src/main/compose/upgrade/test.sh
index 7284bf7..1c16c81 100644
--- a/hadoop-ozone/dist/src/main/compose/upgrade/test.sh
+++ b/hadoop-ozone/dist/src/main/compose/upgrade/test.sh
@@ -45,7 +45,8 @@
 
 # prepare pre-upgrade cluster
 start_docker_env
-execute_robot_test scm topology/loaddata.robot
+execute_robot_test scm -v PREFIX:pre freon/generate.robot
+execute_robot_test scm -v PREFIX:pre freon/validate.robot
 KEEP_RUNNING=false stop_docker_env
 
 # run upgrade scripts
@@ -63,7 +64,10 @@
 # re-start cluster with new version and check after upgrade
 export OZONE_KEEP_RESULTS=true
 start_docker_env
-execute_robot_test scm topology/readdata.robot
+execute_robot_test scm -v PREFIX:pre freon/validate.robot
+# test write key to old bucket after upgrade
+execute_robot_test scm -v PREFIX:post freon/generate.robot
+execute_robot_test scm -v PREFIX:post freon/validate.robot
 stop_docker_env
 
 generate_report
diff --git a/hadoop-ozone/dist/src/main/dockerlibexec/transformation.py b/hadoop-ozone/dist/src/main/dockerlibexec/transformation.py
index 5e708ce..a6f68d2 100755
--- a/hadoop-ozone/dist/src/main/dockerlibexec/transformation.py
+++ b/hadoop-ozone/dist/src/main/dockerlibexec/transformation.py
@@ -91,7 +91,7 @@
   """transform to environment variables"""
   result = ""
   props = process_properties(content)
-  for key, val in props:
+  for key, val in props.items():
     result += "{}={}\n".format(key, val)
   return result
 
@@ -100,7 +100,7 @@
   """transform to shell"""
   result = ""
   props = process_properties(content)
-  for key, val in props:
+  for key, val in props.items():
     result += "export {}=\"{}\"\n".format(key, val)
   return result
 
@@ -109,7 +109,7 @@
   """transform to config"""
   result = ""
   props = process_properties(content)
-  for key, val in props:
+  for key, val in props.items():
     result += "{}={}\n".format(key, val)
   return result
 
@@ -118,7 +118,7 @@
   """transform to configuration"""
   result = ""
   props = process_properties(content)
-  for key, val in props:
+  for key, val in props.items():
     result += "export {}={}\n".format(key, val)
   return result
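
The `.items()` fixes above address a genuine bug: iterating a dict directly yields only its keys, so two-target unpacking (`for key, val in props`) raises a ValueError at runtime. A standalone illustration, not part of the patch:

```python
props = {"ozone.om.address": "om", "ozone.replication": "3"}

# for key, val in props:         # ValueError: too many values to unpack
for key, val in props.items():   # yields (key, value) pairs
    print("{}={}".format(key, val))
```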
 
diff --git a/hadoop-ozone/dist/src/main/k8s/definitions/ozone/config.yaml b/hadoop-ozone/dist/src/main/k8s/definitions/ozone/config.yaml
index 124f72f..7b65a3e 100644
--- a/hadoop-ozone/dist/src/main/k8s/definitions/ozone/config.yaml
+++ b/hadoop-ozone/dist/src/main/k8s/definitions/ozone/config.yaml
@@ -27,6 +27,7 @@
   OZONE-SITE.XML_ozone.scm.client.address: "scm-0.scm"
   OZONE-SITE.XML_ozone.scm.names: "scm-0.scm"
   OZONE-SITE.XML_hdds.scm.safemode.min.datanode: "3"
+  OZONE-SITE.XML_ozone.datanode.pipeline.limit: "1"
   LOG4J.PROPERTIES_log4j.rootLogger: "INFO, stdout"
   LOG4J.PROPERTIES_log4j.appender.stdout: "org.apache.log4j.ConsoleAppender"
   LOG4J.PROPERTIES_log4j.appender.stdout.layout: "org.apache.log4j.PatternLayout"
diff --git a/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell-lib.robot b/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell-lib.robot
index 44f3f00..d65b8fd 100644
--- a/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell-lib.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell-lib.robot
@@ -31,66 +31,66 @@
     [arguments]     ${protocol}         ${server}       ${volume}
     ${result} =     Execute And Ignore Error    ozone sh volume info ${protocol}${server}/${volume}
                     Should contain      ${result}       VOLUME_NOT_FOUND
-    ${result} =     Execute             ozone sh volume create ${protocol}${server}/${volume} --space-quota 100TB --count-quota 100
+    ${result} =     Execute             ozone sh volume create ${protocol}${server}/${volume} --space-quota 100TB --namespace-quota 100
                     Should not contain  ${result}       Failed
     ${result} =     Execute             ozone sh volume list ${protocol}${server}/ | jq -r '. | select(.name=="${volume}")'
                     Should contain      ${result}       creationTime
     ${result} =     Execute             ozone sh volume list | jq -r '. | select(.name=="${volume}")'
                     Should contain      ${result}       creationTime
 # TODO: Disable updating the owner, acls should be used to give access to other user.
-                    Execute             ozone sh volume setquota ${protocol}${server}/${volume} --space-quota 10TB --count-quota 100
+                    Execute             ozone sh volume setquota ${protocol}${server}/${volume} --space-quota 10TB --namespace-quota 100
 #    ${result} =     Execute             ozone sh volume info ${protocol}${server}/${volume} | jq -r '. | select(.volumeName=="${volume}") | .owner | .name'
 #                    Should Be Equal     ${result}       bill
     ${result} =     Execute             ozone sh volume info ${protocol}${server}/${volume} | jq -r '. | select(.name=="${volume}") | .quotaInBytes'
                     Should Be Equal     ${result}       10995116277760
-                    Execute             ozone sh bucket create ${protocol}${server}/${volume}/bb1 --space-quota 10TB --count-quota 100
+                    Execute             ozone sh bucket create ${protocol}${server}/${volume}/bb1 --space-quota 10TB --namespace-quota 100
     ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .storageType'
                     Should Be Equal     ${result}       DISK
     ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInBytes'
                     Should Be Equal     ${result}       10995116277760
-    ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInCounts'
+    ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInNamespace'
                     Should Be Equal     ${result}       100
-                    Execute             ozone sh bucket setquota ${protocol}${server}/${volume}/bb1 --space-quota 1TB --count-quota 1000
+                    Execute             ozone sh bucket setquota ${protocol}${server}/${volume}/bb1 --space-quota 1TB --namespace-quota 1000
     ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInBytes'
                     Should Be Equal     ${result}       1099511627776
-    ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInCounts'
+    ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInNamespace'
                     Should Be Equal     ${result}       1000
     ${result} =     Execute             ozone sh bucket list ${protocol}${server}/${volume}/ | jq -r '. | select(.name=="bb1") | .volumeName'
                     Should Be Equal     ${result}       ${volume}
                     Run Keyword         Test key handling       ${protocol}       ${server}       ${volume}
-                    Execute             ozone sh bucket clrquota --space-quota ${protocol}${server}/${volume}/bb1
-    ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInBytes'
-                    Should Be Equal     ${result}       -1
-                    Execute             ozone sh bucket clrquota --count-quota ${protocol}${server}/${volume}/bb1
-    ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInCounts'
-                    Should Be Equal     ${result}       -1
                     Execute             ozone sh volume clrquota --space-quota ${protocol}${server}/${volume}
     ${result} =     Execute             ozone sh volume info ${protocol}${server}/${volume} | jq -r '. | select(.name=="${volume}") | .quotaInBytes'
                     Should Be Equal     ${result}       -1
-                    Execute             ozone sh volume clrquota --count-quota ${protocol}${server}/${volume}
-    ${result} =     Execute             ozone sh volume info ${protocol}${server}/${volume} | jq -r '. | select(.name=="${volume}") | .quotaInCounts'
+                    Execute             ozone sh volume clrquota --namespace-quota ${protocol}${server}/${volume}
+    ${result} =     Execute             ozone sh volume info ${protocol}${server}/${volume} | jq -r '. | select(.name=="${volume}") | .quotaInNamespace'
+                    Should Be Equal     ${result}       -1
+                    Execute             ozone sh bucket clrquota --space-quota ${protocol}${server}/${volume}/bb1
+    ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInBytes'
+                    Should Be Equal     ${result}       -1
+                    Execute             ozone sh bucket clrquota --namespace-quota ${protocol}${server}/${volume}/bb1
+    ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInNamespace'
                     Should Be Equal     ${result}       -1
                     Execute             ozone sh bucket delete ${protocol}${server}/${volume}/bb1
                     Execute             ozone sh volume delete ${protocol}${server}/${volume}
                     Execute             ozone sh volume create ${protocol}${server}/${volume}
     ${result} =     Execute             ozone sh volume info ${protocol}${server}/${volume} | jq -r '. | select(.name=="${volume}") | .quotaInBytes'
                     Should Be Equal     ${result}       -1
-    ${result} =     Execute             ozone sh volume info ${protocol}${server}/${volume} | jq -r '. | select(.name=="${volume}") | .quotaInCounts'
+    ${result} =     Execute             ozone sh volume info ${protocol}${server}/${volume} | jq -r '. | select(.name=="${volume}") | .quotaInNamespace'
                     Should Be Equal     ${result}       -1
                     Execute             ozone sh bucket create ${protocol}${server}/${volume}/bb1
     ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInBytes'
                     Should Be Equal     ${result}       -1
-    ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInCounts'
+    ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInNamespace'
                     Should Be Equal     ${result}       -1
-                    Execute             ozone sh volume setquota ${protocol}${server}/${volume} --space-quota 0TB --count-quota 0
+                    Execute             ozone sh volume setquota ${protocol}${server}/${volume} --space-quota 0TB --namespace-quota 0
     ${result} =     Execute             ozone sh volume info ${protocol}${server}/${volume} | jq -r '. | select(.name=="${volume}") | .quotaInBytes'
                     Should Be Equal     ${result}       -1
-    ${result} =     Execute             ozone sh volume info ${protocol}${server}/${volume} | jq -r '. | select(.name=="${volume}") | .quotaInCounts'
+    ${result} =     Execute             ozone sh volume info ${protocol}${server}/${volume} | jq -r '. | select(.name=="${volume}") | .quotaInNamespace'
                     Should Be Equal     ${result}       -1
-                    Execute             ozone sh bucket setquota ${protocol}${server}/${volume}/bb1 --space-quota 0TB --count-quota 0
+                    Execute             ozone sh bucket setquota ${protocol}${server}/${volume}/bb1 --space-quota 0TB --namespace-quota 0
     ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInBytes'
                     Should Be Equal     ${result}       -1
-    ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInCounts'
+    ${result} =     Execute             ozone sh bucket info ${protocol}${server}/${volume}/bb1 | jq -r '. | select(.name=="bb1") | .quotaInNamespace'
                     Should Be Equal     ${result}       -1
                     Execute             ozone sh bucket delete ${protocol}${server}/${volume}/bb1
                     Execute             ozone sh volume delete ${protocol}${server}/${volume}
diff --git a/hadoop-ozone/dist/src/main/smoketest/createbucketenv.robot b/hadoop-ozone/dist/src/main/smoketest/createbucketenv.robot
index 801f553..8a79fd2 100644
--- a/hadoop-ozone/dist/src/main/smoketest/createbucketenv.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/createbucketenv.robot
@@ -27,7 +27,7 @@
 
 *** Keywords ***
 Create volume
-    ${result} =     Execute             ozone sh volume create /${volume} --user hadoop --space-quota 100TB --count-quota 100
+    ${result} =     Execute             ozone sh volume create /${volume} --user hadoop --space-quota 100TB --namespace-quota 100
                     Should not contain  ${result}       Failed
 Create bucket
                     Execute             ozone sh bucket create /${volume}/${bucket}
diff --git a/hadoop-ozone/dist/src/main/smoketest/createmrenv.robot b/hadoop-ozone/dist/src/main/smoketest/createmrenv.robot
index a391909..89fa883 100644
--- a/hadoop-ozone/dist/src/main/smoketest/createmrenv.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/createmrenv.robot
@@ -29,7 +29,7 @@
 
 *** Keywords ***
 Create volume
-    ${result} =     Execute             ozone sh volume create /${volume} --user hadoop --space-quota 100TB --count-quota 100
+    ${result} =     Execute             ozone sh volume create /${volume} --user hadoop --space-quota 100TB --namespace-quota 100
                     Should not contain  ${result}       Failed
 Create bucket
                     Execute             ozone sh bucket create /${volume}/${bucket}
diff --git a/hadoop-ozone/dist/src/main/smoketest/debug/ozone-debug.robot b/hadoop-ozone/dist/src/main/smoketest/debug/ozone-debug.robot
index 1ba0511..a70e2e7 100644
--- a/hadoop-ozone/dist/src/main/smoketest/debug/ozone-debug.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/debug/ozone-debug.robot
@@ -23,7 +23,7 @@
 
 *** Keywords ***
 Write key
-    Execute             ozone sh volume create o3://om/vol1 --space-quota 100TB --count-quota 100
+    Execute             ozone sh volume create o3://om/vol1 --space-quota 100TB --namespace-quota 100
     Execute             ozone sh bucket create o3://om/vol1/bucket1
     Execute             ozone sh key put o3://om/vol1/bucket1/debugKey /opt/hadoop/NOTICE.txt
 
diff --git a/hadoop-ozone/dist/src/main/smoketest/freon/freon.robot b/hadoop-ozone/dist/src/main/smoketest/freon/freon.robot
deleted file mode 100644
index 74c1a15..0000000
--- a/hadoop-ozone/dist/src/main/smoketest/freon/freon.robot
+++ /dev/null
@@ -1,37 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-*** Settings ***
-Documentation       Smoketest ozone cluster startup
-Library             OperatingSystem
-Resource            ../commonlib.robot
-Test Timeout        5 minutes
-
-*** Test Cases ***
-Freon Randomkey Generator
-    ${result} =        Execute              ozone freon rk ${OM_HA_PARAM} --num-of-volumes=1 --num-of-buckets=1 --num-of-keys=1 --num-of-threads=1
-                       Wait Until Keyword Succeeds      3min       10sec     Should contain   ${result}   Number of Keys added: 1
-
-Freon Ozone Key Generator
-    ${result} =        Execute              ozone freon ockg ${OM_HA_PARAM} -t=1 -n=1
-                       Wait Until Keyword Succeeds      3min       10sec     Should contain   ${result}   Successful executions: 1
-
-Freon OM Key Generator
-    ${result} =        Execute              ozone freon omkg ${OM_HA_PARAM} -t=1 -n=1
-                       Wait Until Keyword Succeeds      3min       10sec     Should contain   ${result}   Successful executions: 1
-
-Freon OM Bucket Generator
-    ${result} =        Execute              ozone freon ombg ${OM_HA_PARAM} -t=1 -n=1
-                       Wait Until Keyword Succeeds      3min       10sec     Should contain   ${result}   Successful executions: 1
\ No newline at end of file
diff --git a/hadoop-ozone/dist/src/main/smoketest/freon/generate.robot b/hadoop-ozone/dist/src/main/smoketest/freon/generate.robot
new file mode 100644
index 0000000..de1df10
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/smoketest/freon/generate.robot
@@ -0,0 +1,35 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Test freon data generation commands
+Resource            ../lib/os.robot
+Test Timeout        5 minutes
+
+*** Variables ***
+${PREFIX}    ${EMPTY}
+
+*** Test Cases ***
+Ozone Client Key Generator
+    ${result} =        Execute          ozone freon ockg ${OM_HA_PARAM} -t=1 -n=1 -p ockg${PREFIX}
+                       Should contain   ${result}   Successful executions: 1
+
+OM Key Generator
+    ${result} =        Execute          ozone freon omkg ${OM_HA_PARAM} -t=1 -n=1 -p omkg${PREFIX}
+                       Should contain   ${result}   Successful executions: 1
+
+OM Bucket Generator
+    ${result} =        Execute          ozone freon ombg ${OM_HA_PARAM} -t=1 -n=1 -p ombg${PREFIX}
+                       Should contain   ${result}   Successful executions: 1
diff --git a/hadoop-ozone/dist/src/main/smoketest/freon/validate.robot b/hadoop-ozone/dist/src/main/smoketest/freon/validate.robot
new file mode 100644
index 0000000..0689654
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/smoketest/freon/validate.robot
@@ -0,0 +1,27 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Test freon data validation commands
+Resource            ../lib/os.robot
+Test Timeout        5 minutes
+
+*** Variables ***
+${PREFIX}    ${EMPTY}
+
+*** Test Cases ***
+Ozone Client Key Validator
+    ${result} =        Execute          ozone freon ockv ${OM_HA_PARAM} -t=1 -n=1 -p ockg${PREFIX}
+                       Should contain   ${result}   Successful executions: 1
diff --git a/hadoop-ozone/dist/src/main/smoketest/gdpr/gdpr.robot b/hadoop-ozone/dist/src/main/smoketest/gdpr/gdpr.robot
index 1c093ab..4487fa9 100644
--- a/hadoop-ozone/dist/src/main/smoketest/gdpr/gdpr.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/gdpr/gdpr.robot
@@ -46,7 +46,7 @@
 *** Keywords ***
 Test GDPR(disabled) without explicit options
     [arguments]     ${volume}
-                    Execute             ozone sh volume create /${volume} --space-quota 100TB --count-quota 100
+                    Execute             ozone sh volume create /${volume} --space-quota 100TB --namespace-quota 100
                     Execute             ozone sh bucket create /${volume}/mybucket1
     ${result} =     Execute             ozone sh bucket info /${volume}/mybucket1 | jq -r '. | select(.name=="mybucket1") | .metadata | .gdprEnabled'
                     Should Be Equal     ${result}       null
diff --git a/hadoop-ozone/dist/src/main/smoketest/mapreduce.robot b/hadoop-ozone/dist/src/main/smoketest/mapreduce.robot
index 9eff89e..c11695a 100644
--- a/hadoop-ozone/dist/src/main/smoketest/mapreduce.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/mapreduce.robot
@@ -41,7 +41,7 @@
 
 Execute WordCount
                     ${exampleJar}    Find example jar
-                    ${random}        Generate Random String  2   [NUMBERS]
+    ${random} =     Generate Random String
     ${root} =       Format FS URL    ${SCHEME}    ${volume}    ${bucket}
     ${dir} =        Format FS URL    ${SCHEME}    ${volume}    ${bucket}   input/
     ${result} =     Format FS URL    ${SCHEME}    ${volume}    ${bucket}   wordcount-${random}.txt
diff --git a/hadoop-ozone/dist/src/main/smoketest/ozonefs/ozonefs.robot b/hadoop-ozone/dist/src/main/smoketest/ozonefs/ozonefs.robot
index 450f1b6..453ba51 100644
--- a/hadoop-ozone/dist/src/main/smoketest/ozonefs/ozonefs.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/ozonefs/ozonefs.robot
@@ -78,7 +78,7 @@
                    Execute               ozone fs -cat ${DEEP_URL}/subdir1/NOTICE.txt
 
 Delete file
-                   Execute               ozone fs -rm ${DEEP_URL}/subdir1/NOTICE.txt
+                   Execute               ozone fs -rm -skipTrash ${DEEP_URL}/subdir1/NOTICE.txt
     ${result} =    Execute               ozone sh key list ${VOLUME}/${BUCKET} | jq -r '.name'
                    Should not contain    ${result}       NOTICE.txt
 
@@ -92,10 +92,19 @@
     ${result} =    Execute               ozone sh key list ${VOLUME}/${BUCKET} | jq -r '.name'
                    Should contain        ${result}       TOUCHFILE-${SCHEME}.txt
 
+Delete file with Trash
+                   Execute               ozone fs -touch ${DEEP_URL}/testFile.txt
+                   Execute               ozone fs -rm ${DEEP_URL}/testFile.txt
+    ${result} =    Execute               ozone fs -ls -R ${BASE_URL}/
+                   Should not contain    ${result}     ${DEEP_URL}/testFile.txt
+                   Should Contain Any    ${result}     .Trash/hadoop    .Trash/testuser/scm@EXAMPLE.COM    .Trash/root
+                   Should contain        ${result}     ${DEEP_DIR}/testFile.txt
+
 Delete recursively
-                   Execute               ozone fs -rm -r ${DEEP_URL}/
+                   Execute               ozone fs -mkdir -p ${DEEP_URL}/subdir2
+                   Execute               ozone fs -rm -skipTrash -r ${DEEP_URL}/subdir2
     ${result} =    Execute               ozone sh key list ${VOLUME}/${BUCKET} | jq -r '.name'
-                   Should not contain    ${result}       ${DEEP_DIR}
+                   Should not contain    ${result}       ${DEEP_DIR}/subdir2
 
 List recursively
     [Setup]        Setup localdir1
diff --git a/hadoop-ozone/dist/src/main/smoketest/security/admin-cert.robot b/hadoop-ozone/dist/src/main/smoketest/security/admin-cert.robot
new file mode 100644
index 0000000..1a214c9
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/smoketest/security/admin-cert.robot
@@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+*** Settings ***
+Documentation       Test for ozone admin cert command
+Library             BuiltIn
+Library             String
+Resource            ../commonlib.robot
+Resource            ../lib/os.robot
+Resource            ../ozone-lib/shell.robot
+Suite Setup         Setup Test
+Test Timeout        5 minutes
+
+*** Variables ***
+
+*** Keywords ***
+Setup Test
+    Run Keyword     Kinit test user     testuser     testuser.keytab
+
+*** Test Cases ***
+List valid certificates
+    ${output} =      Execute    ozone admin cert list
+                     Should Contain    ${output}    valid certificates
+
+List revoked certificates
+    ${output} =      Execute    ozone admin cert list -t revoked
+                     Should Contain    ${output}    Total 0 revoked certificates
+
+Info of the cert
+    ${output} =      Execute   for id in $(ozone admin cert list -c 1|grep UTC|awk '{print $1}'); do ozone admin cert info $id; done
+                     Should not Contain    ${output}    Certificate not found
+
diff --git a/hadoop-ozone/dist/src/main/smoketest/topology/cli.robot b/hadoop-ozone/dist/src/main/smoketest/topology/cli.robot
index 3f83ba3..bbe7a1b 100644
--- a/hadoop-ozone/dist/src/main/smoketest/topology/cli.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/topology/cli.robot
@@ -26,8 +26,8 @@
 *** Test Cases ***
 Run printTopology
     ${output} =         Execute          ozone admin printTopology
-                        Should contain   ${output}         10.5.0.7(ozone-topology_datanode_4_1.ozone-topology_net)    /rack2
+                        Should contain   ${output}         10.5.0.7(ozone-topology_datanode_4_1.ozone-topology_net)    IN_SERVICE    /rack2
 Run printTopology -o
     ${output} =         Execute          ozone admin printTopology -o
                         Should contain   ${output}         Location: /rack2
-                        Should contain   ${output}         10.5.0.7(ozone-topology_datanode_4_1.ozone-topology_net)
+                        Should contain   ${output}         10.5.0.7(ozone-topology_datanode_4_1.ozone-topology_net) IN_SERVICE
diff --git a/hadoop-ozone/dist/src/shell/ozone/ozone b/hadoop-ozone/dist/src/shell/ozone/ozone
index 8b7b1c6..60509bf 100755
--- a/hadoop-ozone/dist/src/shell/ozone/ozone
+++ b/hadoop-ozone/dist/src/shell/ozone/ozone
@@ -77,6 +77,14 @@
   if [ ! -f "${ozone_shell_log4j}" ]; then
     ozone_shell_log4j=${ozone_default_log4j}
   fi
+  # Add JVM parameter (org.apache.ratis.thirdparty.io.netty.allocator.useCacheForAllThreads=false)
+  # for disabling netty PooledByteBufAllocator thread caches for non-netty threads.
+  # This parameter significantly reduces GC pressure for Datanode.
+  # Corresponding Ratis issue https://issues.apache.org/jira/browse/RATIS-534.
+  # TODO: Fix the problem related to netty resource leak detector throwing
+  # exception as mentioned in HDDS-3812
+  RATIS_OPTS="-Dorg.apache.ratis.thirdparty.io.netty.allocator.useCacheForAllThreads=false ${RATIS_OPTS}"
+  RATIS_OPTS="-Dorg.apache.ratis.thirdparty.io.netty.leakDetection.level=disabled ${RATIS_OPTS}"
 
   case ${subcmd} in
     auditparser)
@@ -101,14 +109,9 @@
     ;;
     datanode)
       HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
-      # Add JVM parameter (org.apache.ratis.thirdparty.io.netty.allocator.useCacheForAllThreads=false)
-      # for disabling netty PooledByteBufAllocator thread caches for non-netty threads.
-      # This parameter significantly reduces GC pressure for Datanode.
-      # Corresponding Ratis issue https://issues.apache.org/jira/browse/RATIS-534.
-      # TODO: Fix the problem related to netty resource leak detector throwing
-      # exception as mentioned in HDDS-3812
       hadoop_deprecate_envvar HDDS_DN_OPTS OZONE_DATANODE_OPTS
-      OZONE_DATANODE_OPTS="-Dorg.apache.ratis.thirdparty.io.netty.allocator.useCacheForAllThreads=false -Dorg.apache.ratis.thirdparty.io.netty.leakDetection.level=disabled -Dlog4j.configurationFile=${HADOOP_CONF_DIR}/dn-audit-log4j2.properties ${OZONE_DATANODE_OPTS}"
+      OZONE_DATANODE_OPTS="${RATIS_OPTS} ${OZONE_DATANODE_OPTS}"
+      OZONE_DATANODE_OPTS="-Dlog4j.configurationFile=${HADOOP_CONF_DIR}/dn-audit-log4j2.properties ${OZONE_DATANODE_OPTS}"
       OZONE_DATANODE_OPTS="-Dlog4j2.contextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector ${OZONE_DATANODE_OPTS}"
       HADOOP_CLASSNAME=org.apache.hadoop.ozone.HddsDatanodeService
       OZONE_RUN_ARTIFACT_NAME="hadoop-ozone-datanode"
@@ -158,6 +161,7 @@
       HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
       HADOOP_CLASSNAME=org.apache.hadoop.ozone.om.OzoneManagerStarter
       hadoop_deprecate_envvar HDFS_OM_OPTS OZONE_OM_OPTS
+      OZONE_OM_OPTS="${RATIS_OPTS} ${OZONE_OM_OPTS}"
       OZONE_OM_OPTS="${OZONE_OM_OPTS} -Dlog4j.configurationFile=${HADOOP_CONF_DIR}/om-audit-log4j2.properties"
       OZONE_OM_OPTS="-Dlog4j2.contextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector ${OZONE_OM_OPTS}"
       OZONE_RUN_ARTIFACT_NAME="hadoop-ozone-ozone-manager"
@@ -177,6 +181,7 @@
       HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
       HADOOP_CLASSNAME='org.apache.hadoop.hdds.scm.server.StorageContainerManagerStarter'
       hadoop_deprecate_envvar HDFS_STORAGECONTAINERMANAGER_OPTS OZONE_SCM_OPTS
+      OZONE_SCM_OPTS="${RATIS_OPTS} ${OZONE_SCM_OPTS}"
       OZONE_SCM_OPTS="${OZONE_SCM_OPTS} -Dlog4j.configurationFile=${HADOOP_CONF_DIR}/scm-audit-log4j2.properties"
       OZONE_SCM_OPTS="-Dlog4j2.contextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector ${OZONE_SCM_OPTS}"
       OZONE_RUN_ARTIFACT_NAME="hadoop-hdds-server-scm"
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/contract/AbstractContractUnbufferTest.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/contract/AbstractContractUnbufferTest.java
new file mode 100644
index 0000000..809e6d1
--- /dev/null
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/contract/AbstractContractUnbufferTest.java
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *       http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.contract;
+
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.Path;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.createFile;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+
+/**
+ * Contract tests for {@link org.apache.hadoop.fs.CanUnbuffer#unbuffer}.
+ * Note: this is from Hadoop 3.3, can be removed after dependency upgrade.
+ */
+public abstract class AbstractContractUnbufferTest
+    extends AbstractFSContractTestBase {
+
+  private Path file;
+  private byte[] fileBytes;
+
+  private static final String SUPPORTS_UNBUFFER = "supports-unbuffer";
+
+  @Override
+  public void setup() throws Exception {
+    super.setup();
+    skipIfUnsupported(SUPPORTS_UNBUFFER);
+    file = path("unbufferFile");
+    fileBytes = dataset(TEST_FILE_LEN, 0, 255);
+    createFile(getFileSystem(), file, true, fileBytes);
+  }
+
+  @Test
+  public void testUnbufferAfterRead() throws IOException {
+    describe("unbuffer a file after a single read");
+    try (FSDataInputStream stream = getFileSystem().open(file)) {
+      validateFullFileContents(stream);
+      unbuffer(stream);
+    }
+  }
+
+  @Test
+  public void testUnbufferBeforeRead() throws IOException {
+    describe("unbuffer a file before a read");
+    try (FSDataInputStream stream = getFileSystem().open(file)) {
+      unbuffer(stream);
+      validateFullFileContents(stream);
+    }
+  }
+
+  @Test
+  public void testUnbufferEmptyFile() throws IOException {
+    Path emptyFile = path("emptyUnbufferFile");
+    getFileSystem().create(emptyFile, true).close();
+    describe("unbuffer an empty file");
+    try (FSDataInputStream stream = getFileSystem().open(emptyFile)) {
+      unbuffer(stream);
+    }
+  }
+
+  @Test
+  public void testUnbufferOnClosedFile() throws IOException {
+    describe("unbuffer a file after it is closed");
+    FSDataInputStream stream = null;
+    try {
+      stream = getFileSystem().open(file);
+      validateFullFileContents(stream);
+    } finally {
+      if (stream != null) {
+        stream.close();
+      }
+    }
+    if (stream != null) {
+      unbuffer(stream);
+    }
+  }
+
+  @Test
+  public void testMultipleUnbuffers() throws IOException {
+    describe("unbuffer a file multiple times");
+    try (FSDataInputStream stream = getFileSystem().open(file)) {
+      unbuffer(stream);
+      unbuffer(stream);
+      validateFullFileContents(stream);
+      unbuffer(stream);
+      unbuffer(stream);
+    }
+  }
+
+  @Test
+  public void testUnbufferMultipleReads() throws IOException {
+    describe("unbuffer a file multiple times");
+    try (FSDataInputStream stream = getFileSystem().open(file)) {
+      unbuffer(stream);
+      validateFileContents(stream, TEST_FILE_LEN / 8, 0);
+      unbuffer(stream);
+      validateFileContents(stream, TEST_FILE_LEN / 8, TEST_FILE_LEN / 8);
+      validateFileContents(stream, TEST_FILE_LEN / 4, TEST_FILE_LEN / 4);
+      unbuffer(stream);
+      validateFileContents(stream, TEST_FILE_LEN / 2, TEST_FILE_LEN / 2);
+      unbuffer(stream);
+      assertEquals("stream should be at end of file", TEST_FILE_LEN,
+              stream.getPos());
+    }
+  }
+
+  private void unbuffer(FSDataInputStream stream) throws IOException {
+    long pos = stream.getPos();
+    stream.unbuffer();
+    assertEquals("unbuffer unexpectedly changed the stream position", pos,
+            stream.getPos());
+  }
+
+  protected void validateFullFileContents(FSDataInputStream stream)
+          throws IOException {
+    validateFileContents(stream, TEST_FILE_LEN, 0);
+  }
+
+  protected void validateFileContents(FSDataInputStream stream, int length,
+                                      int startIndex)
+          throws IOException {
+    byte[] streamData = new byte[length];
+    assertEquals("failed to read expected number of bytes from "
+            + "stream. This may be transient",
+        length, stream.read(streamData));
+    byte[] validateFileBytes;
+    if (startIndex == 0 && length == fileBytes.length) {
+      validateFileBytes = fileBytes;
+    } else {
+      validateFileBytes = Arrays.copyOfRange(fileBytes, startIndex,
+              startIndex + length);
+    }
+    assertArrayEquals("invalid file contents", validateFileBytes, streamData);
+  }
+
+  protected Path getFile() {
+    return file;
+  }
+}
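
As the javadoc notes, this class is copied from Hadoop 3.3 and can be dropped after a dependency upgrade; a concrete filesystem test only needs to supply the contract. The subclass and contract names below are illustrative assumptions, not part of this patch:

```java
// Hypothetical wiring of the copied contract test to a concrete filesystem;
// ITestOzoneContractUnbuffer and OzoneContract are assumed names.
import org.apache.hadoop.conf.Configuration;

public class ITestOzoneContractUnbuffer extends AbstractContractUnbufferTest {
  @Override
  protected AbstractFSContract createContract(Configuration conf) {
    return new OzoneContract(conf); // assumed contract implementation
  }
}
```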
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSWithObjectStoreCreate.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSWithObjectStoreCreate.java
index a72a257..19f7c15 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSWithObjectStoreCreate.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSWithObjectStoreCreate.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.fs.ozone;
 
+import org.apache.commons.io.IOUtils;
 import org.apache.commons.lang3.RandomStringUtils;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
@@ -36,6 +37,7 @@
 import org.apache.hadoop.ozone.om.OMConfigKeys;
 import org.apache.hadoop.ozone.om.exceptions.OMException;
 import org.apache.hadoop.ozone.om.helpers.OmMultipartInfo;
+import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Rule;
@@ -99,6 +101,13 @@
     o3fs = (OzoneFileSystem) FileSystem.get(new URI(rootPath), conf);
   }
 
+  @After
+  public void teardown() {
+    if (cluster != null) {
+      cluster.shutdown();
+    }
+    IOUtils.closeQuietly(o3fs);
+  }
 
   @Test
   public void test() throws Exception {
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
index bd8d45c..9310a32 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
@@ -37,6 +37,7 @@
 import org.apache.hadoop.fs.InvalidPathException;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.Trash;
+import org.apache.hadoop.fs.TrashPolicy;
 import org.apache.hadoop.fs.contract.ContractTestUtils;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
@@ -45,6 +46,7 @@
 import org.apache.hadoop.ozone.client.OzoneBucket;
 import org.apache.hadoop.ozone.client.OzoneKeyDetails;
 import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.TrashPolicyOzone;
 import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
 import org.apache.hadoop.ozone.om.helpers.OpenKeySession;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -85,11 +87,16 @@
 
   @Parameterized.Parameters
   public static Collection<Object[]> data() {
-    return Arrays.asList(new Object[]{true}, new Object[]{false});
+    return Arrays.asList(
+        new Object[]{true, true},
+        new Object[]{true, false},
+        new Object[]{false, true},
+        new Object[]{false, false});
   }
 
-  public TestOzoneFileSystem(boolean setDefaultFs) {
+  public TestOzoneFileSystem(boolean setDefaultFs, boolean enableOMRatis) {
     this.enabledFileSystemPaths = setDefaultFs;
+    this.omRatisEnabled = enableOMRatis;
   }
   /**
    * Set a timeout for each test.
@@ -101,6 +108,7 @@
       LoggerFactory.getLogger(TestOzoneFileSystem.class);
 
   private boolean enabledFileSystemPaths;
+  private boolean omRatisEnabled;
 
   private MiniOzoneCluster cluster;
   private FileSystem fs;
@@ -239,6 +247,9 @@
     testDeleteRoot();
 
     testRecursiveDelete();
+
+    // TODO: HDDS-4669: Fix testTrash to work when OM Ratis is enabled
+    // testTrash();
   }
 
   @After
@@ -252,9 +263,10 @@
   private void setupOzoneFileSystem()
       throws IOException, TimeoutException, InterruptedException {
     OzoneConfiguration conf = new OzoneConfiguration();
-    conf.setInt(FS_TRASH_INTERVAL_KEY, 1);
+    conf.setBoolean(OMConfigKeys.OZONE_OM_RATIS_ENABLE_KEY, omRatisEnabled);
     conf.setBoolean(OMConfigKeys.OZONE_OM_ENABLE_FILESYSTEM_PATHS,
         enabledFileSystemPaths);
+    conf.setInt(FS_TRASH_INTERVAL_KEY, 1);
     cluster = MiniOzoneCluster.newBuilder(conf)
         .setNumDatanodes(3)
         .build();
@@ -776,4 +788,58 @@
     // Cleanup
     o3fs.delete(trashRoot, true);
   }
+
+  /**
+   * 1. Move a key to Trash.
+   * 2. Verify that the key gets deleted by the trash emptier.
+   * @throws Exception
+   */
+
+  public void testTrash() throws Exception {
+    String testKeyName = "testKey2";
+    Path path = new Path(OZONE_URI_DELIMITER, testKeyName);
+    ContractTestUtils.touch(fs, path);
+    Assert.assertTrue(trash.getConf().getClass(
+        "fs.trash.classname", TrashPolicy.class).
+        isAssignableFrom(TrashPolicyOzone.class));
+    Assert.assertEquals(1, trash.getConf().getInt(FS_TRASH_INTERVAL_KEY, 0));
+    // Call moveToTrash. We can't call protected fs.rename() directly
+    trash.moveToTrash(path);
+
+    // Construct paths
+    String username = UserGroupInformation.getCurrentUser().getShortUserName();
+    Path trashRoot = new Path(OZONE_URI_DELIMITER, TRASH_PREFIX);
+    Path userTrash = new Path(trashRoot, username);
+    Path userTrashCurrent = new Path(userTrash, "Current");
+    Path trashPath = new Path(userTrashCurrent, testKeyName);
+
+    // Wait until the TrashEmptier purges the key
+    GenericTestUtils.waitFor(() -> {
+      try {
+        return !o3fs.exists(trashPath);
+      } catch (IOException e) {
+        LOG.error("Delete from Trash Failed");
+        Assert.fail("Delete from Trash Failed");
+        return false;
+      }
+    }, 1000, 120000);
+
+    // userTrash path will contain the checkpoint folder
+    Assert.assertEquals(1, fs.listStatus(userTrash).length);
+
+    // wait for deletion of checkpoint dir
+    GenericTestUtils.waitFor(() -> {
+      try {
+        return o3fs.listStatus(userTrash).length == 0;
+      } catch (IOException e) {
+        LOG.error("Delete from Trash Failed");
+        Assert.fail("Delete from Trash Failed");
+        return false;
+      }
+    }, 1000, 120000);
+
+    // Cleanup
+    fs.delete(trashRoot, true);
+
+  }
 }
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemMetrics.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemMetrics.java
new file mode 100644
index 0000000..9b619a5
--- /dev/null
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemMetrics.java
@@ -0,0 +1,157 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.TestDataUtil;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.junit.Rule;
+import org.junit.BeforeClass;
+import org.junit.AfterClass;
+import org.junit.Test;
+import org.junit.Assert;
+import org.junit.rules.Timeout;
+
+import java.io.IOException;
+
+import static org.apache.hadoop.hdds.StringUtils.string2Bytes;
+
+/**
+ * Test OM Metrics for OzoneFileSystem operations.
+ */
+public class TestOzoneFileSystemMetrics {
+  /**
+   * Set a timeout for each test.
+   */
+  @Rule
+  public Timeout timeout = new Timeout(300000);
+  private static MiniOzoneCluster cluster = null;
+  private static FileSystem fs;
+  private static OzoneBucket bucket;
+
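+  /** Type of entity created and committed by each metrics test. */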
+  enum TestOps {
+    File,
+    Directory,
+    Key
+  }
+  /**
+   * Create a MiniOzoneCluster for testing.
+   * <p>
+   * Ozone is made active by setting OZONE_ENABLED = true
+   *
+   * @throws IOException
+   */
+  @BeforeClass
+  public static void init() throws Exception {
+    OzoneConfiguration conf = new OzoneConfiguration();
+    conf.setBoolean(OMConfigKeys.OZONE_OM_ENABLE_FILESYSTEM_PATHS, true);
+    cluster = MiniOzoneCluster.newBuilder(conf)
+        .setNumDatanodes(3)
+        .setChunkSize(2) // MB
+        .setBlockSize(8) // MB
+        .setStreamBufferFlushSize(2) // MB
+        .setStreamBufferMaxSize(4) // MB
+        .build();
+    cluster.waitForClusterToBeReady();
+
+    // create a volume and a bucket to be used by OzoneFileSystem
+    bucket = TestDataUtil.createVolumeAndBucket(cluster);
+
+    // Set the fs.defaultFS and start the filesystem
+    String uri = String.format("%s://%s.%s/",
+        OzoneConsts.OZONE_URI_SCHEME, bucket.getName(), bucket.getVolumeName());
+    conf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, uri);
+    fs =  FileSystem.get(conf);
+  }
+
+  /**
+   * Shutdown MiniOzoneCluster.
+   */
+  @AfterClass
+  public static void shutdown() throws IOException {
+    fs.close();
+    cluster.shutdown();
+  }
+
+  @Test
+  public void testKeyOps() throws Exception {
+    testOzoneFileCommit(TestOps.Key);
+  }
+
+  @Test
+  public void testFileOps() throws Exception {
+    testOzoneFileCommit(TestOps.File);
+  }
+
+  @Test
+  public void testDirOps() throws Exception {
+    testOzoneFileCommit(TestOps.Directory);
+  }
+
+  private void testOzoneFileCommit(TestOps op) throws Exception {
+    long numKeysBeforeCreate = cluster
+        .getOzoneManager().getMetrics().getNumKeys();
+
+    int fileLen = 30 * 1024 * 1024;
+    byte[] data = string2Bytes(RandomStringUtils.randomAlphanumeric(fileLen));
+
+    Path parentDir = new Path("/" + RandomStringUtils.randomAlphanumeric(5));
+    Path filePath = new Path(parentDir,
+        RandomStringUtils.randomAlphanumeric(5));
+
+    switch (op) {
+    case Key:
+      try (OzoneOutputStream stream =
+               bucket.createKey(filePath.toString(), fileLen)) {
+        stream.write(data);
+      }
+      break;
+    case File:
+      try (FSDataOutputStream stream = fs.create(filePath)) {
+        stream.write(data);
+      }
+      break;
+    case Directory:
+      fs.mkdirs(filePath);
+      break;
+    default:
+      throw new IOException("Execution should never reach here: " + op);
+    }
+
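+    // With OZONE_OM_ENABLE_FILESYSTEM_PATHS enabled, committing the path is
+    // expected to add two keys in OM: the parent directory and the new entry.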
+    long numKeysAfterCommit = cluster
+        .getOzoneManager().getMetrics().getNumKeys();
+    Assert.assertTrue(numKeysAfterCommit > 0);
+    Assert.assertEquals(numKeysBeforeCreate + 2, numKeysAfterCommit);
+    fs.delete(parentDir, true);
+
+    long numKeysAfterDelete = cluster
+        .getOzoneManager().getMetrics().getNumKeys();
+    Assert.assertTrue(numKeysAfterDelete >= 0);
+    Assert.assertEquals(numKeysBeforeCreate, numKeysAfterDelete);
+  }
+}
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemMissingParent.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemMissingParent.java
new file mode 100644
index 0000000..cb59d2e
--- /dev/null
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemMissingParent.java
@@ -0,0 +1,127 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone;
+
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.TestDataUtil;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+
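+/**
+ * Tests file commit behaviour when the parent directory of an open file is
+ * deleted or renamed before the stream is closed.
+ */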
+public class TestOzoneFileSystemMissingParent {
+
+  private static OzoneConfiguration conf;
+  private static MiniOzoneCluster cluster;
+  private static Path bucketPath;
+  private static FileSystem fs;
+
+  @BeforeClass
+  public static void init() throws Exception {
+    conf = new OzoneConfiguration();
+    conf.setBoolean(OMConfigKeys.OZONE_OM_ENABLE_FILESYSTEM_PATHS, true);
+
+    cluster = MiniOzoneCluster.newBuilder(conf).setNumDatanodes(3).build();
+    cluster.waitForClusterToBeReady();
+
+    OzoneBucket bucket = TestDataUtil.createVolumeAndBucket(cluster);
+
+    String volumeName = bucket.getVolumeName();
+    Path volumePath = new Path(OZONE_URI_DELIMITER, volumeName);
+    String bucketName = bucket.getName();
+    bucketPath = new Path(volumePath, bucketName);
+
+    String rootPath = String
+        .format("%s://%s/", OzoneConsts.OZONE_OFS_URI_SCHEME,
+            conf.get(OZONE_OM_ADDRESS_KEY));
+
+    // Set the fs.defaultFS and create filesystem.
+    conf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, rootPath);
+    fs = FileSystem.get(conf);
+  }
+
+  @After
+  public void cleanUp() throws Exception {
+    fs.delete(bucketPath, true);
+  }
+
+  @AfterClass
+  public static void tearDown() {
+    if (cluster != null) {
+      cluster.shutdown();
+      cluster = null;
+    }
+  }
+
+  /**
+   * Test that closing the file fails if its parent directory was deleted.
+   */
+  @Test
+  public void testCloseFileWithDeletedParent() throws Exception {
+    // Test if the parent directory gets deleted before commit.
+    Path parent = new Path(bucketPath, "parent");
+    Path file = new Path(parent, "file");
+
+    // Create a file under a missing parent; this creates the parent dir.
+    FSDataOutputStream stream = fs.create(file);
+
+    // Delete the parent.
+    fs.delete(parent, false);
+
+    // Close should throw an exception, since the parent doesn't exist.
+    LambdaTestUtils.intercept(OMException.class,
+        "Cannot create file : parent/file " + "as parent "
+            + "directory doesn't exist", () -> stream.close());
+  }
+
+  /**
+   * Test that closing the file fails if its parent directory was renamed.
+   */
+  @Test
+  public void testCloseFileWithRenamedParent() throws Exception {
+    Path parent = new Path(bucketPath, "parent");
+    Path file = new Path(parent, "file");
+
+    // Create a file under a missing parent; this creates the parent dir.
+    FSDataOutputStream stream = fs.create(file);
+
+    // Rename the parent to some different path.
+    Path renamedPath = new Path(bucketPath, "parent1");
+    fs.rename(parent, renamedPath);
+
+    // Close should throw an exception, since the parent has been moved.
+    LambdaTestUtils.intercept(OMException.class,
+        "Cannot create file : parent/file " + "as parent "
+            + "directory doesn't exist", () -> stream.close());
+  }
+}
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
index ab35bbb..5c51f0b 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
@@ -33,6 +33,7 @@
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OFSPath;
 import org.apache.hadoop.ozone.OzoneAcl;
 import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.TestDataUtil;
@@ -54,6 +55,7 @@
 import org.junit.AfterClass;
 import org.junit.Assert;
 import org.junit.BeforeClass;
+import org.junit.Ignore;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.Timeout;
@@ -95,17 +97,24 @@
 
   @Parameterized.Parameters
   public static Collection<Object[]> data() {
-    return Arrays.asList(new Object[]{true}, new Object[]{false});
+    return Arrays.asList(
+        new Object[]{true, true},
+        new Object[]{true, false},
+        new Object[]{false, true},
+        new Object[]{false, false});
   }
 
-  public TestRootedOzoneFileSystem(boolean setDefaultFs) {
+  public TestRootedOzoneFileSystem(boolean setDefaultFs,
+      boolean enableOMRatis) {
     enabledFileSystemPaths = setDefaultFs;
+    omRatisEnabled = enableOMRatis;
   }
 
   @Rule
   public Timeout globalTimeout = new Timeout(300_000);
 
   private static boolean enabledFileSystemPaths;
+  private static boolean omRatisEnabled;
 
   private static OzoneConfiguration conf;
   private static MiniOzoneCluster cluster = null;
@@ -126,6 +135,7 @@
   public static void init() throws Exception {
     conf = new OzoneConfiguration();
     conf.setInt(FS_TRASH_INTERVAL_KEY, 1);
+    conf.setBoolean(OMConfigKeys.OZONE_OM_RATIS_ENABLE_KEY, omRatisEnabled);
     conf.setBoolean(OMConfigKeys.OZONE_OM_ENABLE_FILESYSTEM_PATHS,
         enabledFileSystemPaths);
     cluster = MiniOzoneCluster.newBuilder(conf)
@@ -824,13 +834,13 @@
     // Construct VolumeArgs
     VolumeArgs volumeArgs = new VolumeArgs.Builder()
         .setAcls(Collections.singletonList(aclWorldAccess))
-        .setQuotaInCounts(1000)
+        .setQuotaInNamespace(1000)
         .setQuotaInBytes(Long.MAX_VALUE).build();
     // Sanity check
     Assert.assertNull(volumeArgs.getOwner());
     Assert.assertNull(volumeArgs.getAdmin());
     Assert.assertEquals(Long.MAX_VALUE, volumeArgs.getQuotaInBytes());
-    Assert.assertEquals(1000, volumeArgs.getQuotaInCounts());
+    Assert.assertEquals(1000, volumeArgs.getQuotaInNamespace());
     Assert.assertEquals(0, volumeArgs.getMetadata().size());
     Assert.assertEquals(1, volumeArgs.getAcls().size());
     // Create volume "tmp" with world access. allow non-admin to create buckets
@@ -1183,6 +1193,7 @@
    * 2.Verify that the key gets deleted by the trash emptier.
    * @throws Exception
    */
+  @Ignore("HDDS-4669 : Fix testTrash to work when OM Ratis is enabled")
   @Test
   public void testTrash() throws Exception {
     String testKeyName = "keyToBeDeleted";
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractUnbuffer.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractUnbuffer.java
new file mode 100644
index 0000000..e40b22e
--- /dev/null
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/ITestOzoneContractUnbuffer.java
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone.contract;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractUnbufferTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import java.io.IOException;
+
+/**
+ * Ozone contract tests for {@link org.apache.hadoop.fs.CanUnbuffer#unbuffer}.
+ */
+public class ITestOzoneContractUnbuffer extends AbstractContractUnbufferTest {
+
+  @BeforeClass
+  public static void createCluster() throws IOException {
+    OzoneContract.createCluster();
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+    OzoneContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new OzoneContract(conf);
+  }
+}
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/rooted/ITestRootedOzoneContractUnbuffer.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/rooted/ITestRootedOzoneContractUnbuffer.java
new file mode 100644
index 0000000..e081e8d
--- /dev/null
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/rooted/ITestRootedOzoneContractUnbuffer.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone.contract.rooted;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractUnbufferTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import java.io.IOException;
+
+/**
+ * Ozone contract tests for {@link org.apache.hadoop.fs.CanUnbuffer#unbuffer}.
+ */
+public class ITestRootedOzoneContractUnbuffer
+    extends AbstractContractUnbufferTest {
+
+  @BeforeClass
+  public static void createCluster() throws IOException {
+    RootedOzoneContract.createCluster();
+  }
+
+  @AfterClass
+  public static void teardownCluster() {
+    RootedOzoneContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+    return new RootedOzoneContract(conf);
+  }
+}
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNodeFailure.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNodeFailure.java
index 6f58eae..954299a 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNodeFailure.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNodeFailure.java
@@ -21,6 +21,7 @@
 import org.apache.hadoop.hdds.HddsConfigKeys;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.apache.hadoop.hdds.conf.DatanodeRatisServerConfig;
@@ -65,6 +66,7 @@
     ratisServerConfig.setFollowerSlownessTimeout(Duration.ofSeconds(10));
     ratisServerConfig.setNoLeaderTimeout(Duration.ofMinutes(5));
     conf.setFromObject(ratisServerConfig);
+    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 1);
     conf.set(HddsConfigKeys.HDDS_PIPELINE_REPORT_INTERVAL, "2s");
 
     cluster = MiniOzoneCluster.newBuilder(conf)
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestroy.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestroy.java
index 6236900..b16add0 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestroy.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestroy.java
@@ -21,6 +21,7 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
 import org.apache.hadoop.ozone.HddsDatanodeService;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
@@ -156,7 +157,7 @@
     }
 
     if (cluster.getStorageContainerManager()
-        .getScmNodeManager().getNodeCount(HddsProtos.NodeState.HEALTHY) >=
+        .getScmNodeManager().getNodeCount(NodeStatus.inServiceHealthy()) >=
         HddsProtos.ReplicationFactor.THREE.getNumber()) {
       // make sure pipelines is created after node start
       pipelineManager.triggerPipelineCreation();
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMRestart.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMRestart.java
index 3f62ec3..3e8628f 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMRestart.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMRestart.java
@@ -19,6 +19,7 @@
 package org.apache.hadoop.hdds.scm.pipeline;
 
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.container.ContainerInfo;
 import org.apache.hadoop.hdds.scm.container.ContainerManager;
 import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
@@ -69,6 +70,7 @@
     conf = new OzoneConfiguration();
     conf.setTimeDuration(HDDS_PIPELINE_REPORT_INTERVAL, 1000,
             TimeUnit.MILLISECONDS);
+    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 1);
     int numOfNodes = 4;
     cluster = MiniOzoneCluster.newBuilder(conf)
         .setNumDatanodes(numOfNodes)
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/upgrade/TestHDDSUpgrade.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/upgrade/TestHDDSUpgrade.java
index 07ba70f..dc8d1bb 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/upgrade/TestHDDSUpgrade.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/upgrade/TestHDDSUpgrade.java
@@ -43,6 +43,7 @@
 import org.apache.hadoop.hdds.scm.XceiverClientSpi;
 import org.apache.hadoop.hdds.scm.container.ContainerInfo;
 import org.apache.hadoop.hdds.scm.container.ContainerManager;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
@@ -238,8 +239,13 @@
   private void testDataNodesStateOnSCM(NodeState state) {
     int countNodes = 0;
     for (DatanodeDetails dn : scm.getScmNodeManager().getAllNodes()){
-      Assert.assertEquals(state,
-          scm.getScmNodeManager().getNodeState(dn));
+      try {
+        Assert.assertEquals(state,
+            scm.getScmNodeManager().getNodeStatus(dn).getHealth());
+      } catch (NodeNotFoundException e) {
+        e.printStackTrace();
+        Assert.fail("Node not found");
+      }
       ++countNodes;
     }
     Assert.assertEquals(NUM_DATA_NODES, countNodes);
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
index 629ab5a..c955948 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
@@ -58,6 +58,7 @@
 import org.apache.hadoop.ozone.client.OzoneClientFactory;
 import org.apache.hadoop.ozone.common.Storage.StorageState;
 import org.apache.hadoop.ozone.container.common.utils.ContainerCache;
+import org.apache.hadoop.ozone.container.replication.ReplicationServer.ReplicationConfig;
 import org.apache.hadoop.ozone.om.OMConfigKeys;
 import org.apache.hadoop.ozone.om.OMStorage;
 import org.apache.hadoop.ozone.om.OzoneManager;
@@ -788,6 +789,8 @@
           randomContainerPort);
       conf.setBoolean(OzoneConfigKeys.DFS_CONTAINER_RATIS_IPC_RANDOM_PORT,
           randomContainerPort);
+
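+      // Bind the replication server to port 0 so each datanode in the mini
+      // cluster gets a free port and they do not collide with each other.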
+      conf.setFromObject(new ReplicationConfig().setPort(0));
     }
 
     private void configureTrace() {
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneHAClusterImpl.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneHAClusterImpl.java
index 6953594..d16618a 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneHAClusterImpl.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneHAClusterImpl.java
@@ -171,7 +171,7 @@
   public OzoneManager getOMLeader() {
     OzoneManager res = null;
     for (OzoneManager ozoneManager : this.ozoneManagers) {
-      if (ozoneManager.isLeader()) {
+      if (ozoneManager.isLeaderReady()) {
         if (res != null) {
           // Found more than one leader
           // Return null, expect the caller to retry in a while
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java
index 46e3d67..0b68d4a 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneCluster.java
@@ -31,6 +31,7 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.XceiverClientGrpc;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
@@ -84,6 +85,7 @@
   public static void setup() {
     conf = new OzoneConfiguration();
     conf.set(HddsConfigKeys.OZONE_METADATA_DIRS, TEST_ROOT.toString());
+    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 1);
     conf.setBoolean(DFS_CONTAINER_RATIS_IPC_RANDOM_PORT, true);
     WRITE_TMP.mkdirs();
     READ_TMP.mkdirs();
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneHACluster.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneHACluster.java
index 96121af..051eb94 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneHACluster.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestMiniOzoneHACluster.java
@@ -107,6 +107,6 @@
     Assert.assertNotNull("Timed out waiting OM leader election to finish: "
             + "no leader or more than one leader.", ozoneManager);
     Assert.assertTrue("Should have gotten the leader!",
-        ozoneManager.get().isLeader());
+        ozoneManager.get().isLeaderReady());
   }
 }
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java
index 16604f9..da2a63c 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java
@@ -77,7 +77,8 @@
         ReconServerConfigKeys.RECON_OM_SNAPSHOT_TASK_INITIAL_DELAY,
         ReconServerConfigKeys.RECON_OM_SNAPSHOT_TASK_INTERVAL_DELAY,
         ReconServerConfigKeys.RECON_OM_SNAPSHOT_TASK_FLUSH_PARAM,
-        OMConfigKeys.OZONE_OM_RATIS_SNAPSHOT_AUTO_TRIGGER_THRESHOLD_KEY
+        OMConfigKeys.OZONE_OM_RATIS_SNAPSHOT_AUTO_TRIGGER_THRESHOLD_KEY,
+        OMConfigKeys.OZONE_OM_HA_PREFIX
         // TODO HDDS-2856
     ));
   }
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
index 035602c..071f8db 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
@@ -55,7 +55,6 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeType;
 import org.apache.hadoop.hdds.protocol.proto
     .StorageContainerDatanodeProtocolProtos;
@@ -75,6 +74,7 @@
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
 import org.apache.hadoop.hdds.scm.node.DatanodeInfo;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer;
 import org.apache.hadoop.hdds.scm.server.SCMStorageConfig;
 import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
@@ -279,8 +279,6 @@
 
       Map<Long, List<Long>> containerBlocks = createDeleteTXLog(delLog,
           keyLocations, helper);
-      Set<Long> containerIDs = containerBlocks.keySet();
-
       // Verify a few TX gets created in the TX log.
       Assert.assertTrue(delLog.getNumOfValidTransactions() > 0);
 
@@ -296,8 +294,7 @@
           return false;
         }
       }, 1000, 10000);
-      Assert.assertTrue(helper.getAllBlocks(containerIDs).isEmpty());
-
+      Assert.assertTrue(helper.verifyBlocksWithTxnTable(containerBlocks));
       // Continue the work, add some TXs that with known container names,
       // but unknown block IDs.
       for (Long containerID : containerBlocks.keySet()) {
@@ -387,8 +384,8 @@
             .setMetadataLayoutVersion(versionManager.getMetadataLayoutVersion())
             .build();
         List<SCMCommand> commands = nodeManager.processHeartbeat(
-            nodeManager.getNodes(NodeState.HEALTHY).get(0), layoutInfo);
-
+            nodeManager.getNodes(NodeStatus.inServiceHealthy()).get(0),
+            layoutInfo);
         if (commands != null) {
           for (SCMCommand cmd : commands) {
             if (cmd.getType() == SCMCommandProto.Type.deleteBlocksCommand) {
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
index c67fe30..fe0e075 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
@@ -17,6 +17,7 @@
 package org.apache.hadoop.ozone;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
@@ -32,9 +33,14 @@
 import org.apache.hadoop.ozone.container.common.utils.ReferenceCountedDB;
 import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
 import org.apache.hadoop.ozone.container.keyvalue.helpers.BlockUtils;
+import org.apache.hadoop.ozone.container.metadata.DatanodeStore;
+import org.apache.hadoop.ozone.container.metadata.DatanodeStoreSchemaTwoImpl;
 import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
 import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.hdds.protocol.proto
+    .StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
+
 
 import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
@@ -137,6 +143,30 @@
     return allBlocks;
   }
 
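+  /**
+   * Verifies that the deleted block IDs recorded in each container's
+   * delete-transaction table match the given container-to-blocks map.
+   */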
+  public boolean verifyBlocksWithTxnTable(Map<Long, List<Long>> containerBlocks)
+      throws IOException {
+    Set<Long> containerIDs = containerBlocks.keySet();
+    for (Long entry : containerIDs) {
+      ReferenceCountedDB meta = getContainerMetadata(entry);
+      DatanodeStore ds = meta.getStore();
+      DatanodeStoreSchemaTwoImpl dnStoreTwoImpl =
+          (DatanodeStoreSchemaTwoImpl) ds;
+      List<? extends Table.KeyValue<Long, DeletedBlocksTransaction>>
+          txnsInTxnTable = dnStoreTwoImpl.getDeleteTransactionTable()
+          .getRangeKVs(null, Integer.MAX_VALUE, null);
+      List<Long> localIds = new ArrayList<>();
+      for (Table.KeyValue<Long, DeletedBlocksTransaction> txn :
+          txnsInTxnTable) {
+        localIds.addAll(txn.getValue().getLocalIDList());
+      }
+      if (!localIds.equals(containerBlocks.get(entry))) {
+        return false;
+      }
+      meta.close();
+    }
+    return true;
+  }
+
   private ReferenceCountedDB getContainerMetadata(Long containerID)
       throws IOException {
     ContainerWithPipeline containerWithPipeline = cluster
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestBlockOutputStreamWithFailures.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestBlockOutputStreamWithFailures.java
index b040405..46d48ae 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestBlockOutputStreamWithFailures.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestBlockOutputStreamWithFailures.java
@@ -108,7 +108,7 @@
     conf.setQuietMode(false);
     conf.setStorageSize(OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE, 4,
         StorageUnit.MB);
-    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 3);
+    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 2);
     DatanodeRatisServerConfig ratisServerConfig =
         conf.getObject(DatanodeRatisServerConfig.class);
     ratisServerConfig.setRequestTimeOut(Duration.ofSeconds(3));
@@ -321,7 +321,6 @@
 
     key.flush();
 
-    Assert.assertEquals(2, keyOutputStream.getStreamEntries().size());
     // now close the stream, It will update the ack length after watchForCommit
     key.close();
     // Make sure the retryCount is reset after the exception is handled
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
index 12ba4e6..116c0cf 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
@@ -29,6 +29,7 @@
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumType;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.scm.OzoneClientConfig;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.apache.hadoop.ozone.OzoneConsts;
@@ -95,6 +96,7 @@
 
     conf.setTimeDuration(HDDS_SCM_WATCHER_TIMEOUT, 1000, TimeUnit.MILLISECONDS);
     conf.setTimeDuration(OZONE_SCM_STALENODE_INTERVAL, 3, TimeUnit.SECONDS);
+    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 1);
     conf.setQuietMode(false);
     conf.setStorageSize(OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE, 4,
         StorageUnit.MB);
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachine.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachine.java
index 3dfddfe..fbd9bec 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachine.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachine.java
@@ -30,6 +30,7 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
 import org.apache.hadoop.hdds.scm.OzoneClientConfig;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.apache.hadoop.ozone.client.CertificateClientTestImpl;
@@ -89,6 +90,7 @@
     File baseDir = new File(path);
     baseDir.mkdirs();
 
+    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 1);
     conf.setBoolean(HDDS_BLOCK_TOKEN_ENABLED, true);
     //  conf.setBoolean(OZONE_SECURITY_ENABLED_KEY, true);
     conf.setTimeDuration(HDDS_CONTAINER_REPORT_INTERVAL, 200,
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestDeleteWithSlowFollower.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestDeleteWithSlowFollower.java
index 12c6d62..044ac91 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestDeleteWithSlowFollower.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestDeleteWithSlowFollower.java
@@ -102,6 +102,7 @@
 
     conf.setTimeDuration(HDDS_CONTAINER_REPORT_INTERVAL, 200,
         TimeUnit.MILLISECONDS);
+    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 1);
     // Make the stale, dead and server failure timeout higher so that a dead
     // node is not detected at SCM as well as the pipeline close action
     // never gets initiated early at Datanode in the test.
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestDiscardPreallocatedBlocks.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestDiscardPreallocatedBlocks.java
index 061c5e1..612c522 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestDiscardPreallocatedBlocks.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestDiscardPreallocatedBlocks.java
@@ -99,6 +99,7 @@
     conf.setQuietMode(false);
     conf.setStorageSize(OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE, 4,
         StorageUnit.MB);
+    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 1);
     conf.setInt(ScmConfigKeys.OZONE_SCM_PIPELINE_OWNER_CONTAINER_COUNT, 1);
     cluster = MiniOzoneCluster.newBuilder(conf).setNumDatanodes(3).build();
     cluster.waitForClusterToBeReady();
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestKeyInputStream.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestKeyInputStream.java
index 4dbb0b6..2cb352d 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestKeyInputStream.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestKeyInputStream.java
@@ -18,20 +18,30 @@
 package org.apache.hadoop.ozone.client.rpc;
 
 import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.time.Duration;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.List;
 import java.util.Random;
 import java.util.UUID;
 import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
 
 import org.apache.hadoop.conf.StorageUnit;
 import org.apache.hadoop.hdds.client.ReplicationType;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.scm.OzoneClientConfig;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.XceiverClientManager;
 import org.apache.hadoop.hdds.scm.XceiverClientMetrics;
+import org.apache.hadoop.hdds.scm.container.ReplicationManager.ReplicationManagerConfiguration;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.apache.hadoop.ozone.client.ObjectStore;
@@ -45,21 +55,39 @@
 
 import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.HDDS_SCM_WATCHER_TIMEOUT;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DEADNODE_INTERVAL;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_STALENODE_INTERVAL;
+import static org.apache.hadoop.ozone.container.TestHelper.countReplicas;
+import static org.junit.Assert.fail;
+
+import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.After;
 import org.junit.Assert;
+import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.Timeout;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Tests {@link KeyInputStream}.
  */
 @RunWith(Parameterized.class)
 public class TestKeyInputStream {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestKeyInputStream.class);
+
+  private static final int TIMEOUT = 300_000;
+
   private static MiniOzoneCluster cluster;
   private static OzoneConfiguration conf = new OzoneConfiguration();
   private static OzoneClient client;
@@ -105,12 +133,20 @@
 
     conf.setTimeDuration(HDDS_SCM_WATCHER_TIMEOUT, 1000, TimeUnit.MILLISECONDS);
     conf.setTimeDuration(OZONE_SCM_STALENODE_INTERVAL, 3, TimeUnit.SECONDS);
+    conf.setTimeDuration(OZONE_SCM_DEADNODE_INTERVAL, 6, TimeUnit.SECONDS);
+    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 1);
     conf.setQuietMode(false);
     conf.setStorageSize(OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE, 64,
         StorageUnit.MB);
     conf.set(ScmConfigKeys.OZONE_SCM_CHUNK_LAYOUT_KEY, chunkLayout.name());
+
+    ReplicationManagerConfiguration repConf =
+        conf.getObject(ReplicationManagerConfiguration.class);
+    repConf.setInterval(Duration.ofSeconds(1));
+    conf.setFromObject(repConf);
+
     cluster = MiniOzoneCluster.newBuilder(conf)
-        .setNumDatanodes(3)
+        .setNumDatanodes(4)
         .setTotalPipelineNumLimit(5)
         .setBlockSize(blockSize)
         .setChunkSize(chunkSize)
@@ -130,7 +166,7 @@
   }
 
   @Rule
-  public Timeout timeout = new Timeout(300_000);
+  public Timeout timeout = new Timeout(TIMEOUT);
 
   /**
    * Shutdown MiniDFSCluster.
@@ -156,12 +192,7 @@
     // write data of more than 2 blocks.
     int dataLength = (2 * blockSize) + (chunkSize);
 
-    Random rd = new Random();
-    byte[] inputData = new byte[dataLength];
-    rd.nextBytes(inputData);
-    key.write(inputData);
-    key.close();
-
+    byte[] inputData = writeRandomBytes(key, dataLength);
 
     KeyInputStream keyInputStream = (KeyInputStream) objectStore
         .getVolume(volumeName).getBucket(bucketName).readKey(keyName)
@@ -304,40 +335,23 @@
     OzoneOutputStream key = TestHelper.createKey(keyName,
         ReplicationType.RATIS, 0, objectStore, volumeName, bucketName);
 
-    // write data spanning multiple chunks
+    // write data spanning multiple blocks/chunks
     int dataLength = 2 * blockSize + (blockSize / 2);
-    byte[] originData = new byte[dataLength];
-    Random r = new Random();
-    r.nextBytes(originData);
-    key.write(originData);
-    key.close();
+    byte[] data = writeRandomBytes(key, dataLength);
 
     // read chunk data
-    KeyInputStream keyInputStream = (KeyInputStream) objectStore
+    try (KeyInputStream keyInputStream = (KeyInputStream) objectStore
         .getVolume(volumeName).getBucket(bucketName).readKey(keyName)
-        .getInputStream();
+        .getInputStream()) {
 
-    int[] bufferSizeList = {chunkSize / 4, chunkSize / 2, chunkSize - 1,
-        chunkSize, chunkSize + 1, blockSize - 1, blockSize, blockSize + 1,
-        blockSize * 2};
-    for (int bufferSize : bufferSizeList) {
-      byte[] data = new byte[bufferSize];
-      int totalRead = 0;
-      while (totalRead < dataLength) {
-        int numBytesRead = keyInputStream.read(data);
-        if (numBytesRead == -1 || numBytesRead == 0) {
-          break;
-        }
-        byte[] tmp1 =
-            Arrays.copyOfRange(originData, totalRead, totalRead + numBytesRead);
-        byte[] tmp2 =
-            Arrays.copyOfRange(data, 0, numBytesRead);
-        Assert.assertArrayEquals(tmp1, tmp2);
-        totalRead += numBytesRead;
+      int[] bufferSizeList = {chunkSize / 4, chunkSize / 2, chunkSize - 1,
+          chunkSize, chunkSize + 1, blockSize - 1, blockSize, blockSize + 1,
+          blockSize * 2};
+      for (int bufferSize : bufferSizeList) {
+        assertReadFully(data, keyInputStream, bufferSize, 0);
+        keyInputStream.seek(0);
       }
-      keyInputStream.seek(0);
     }
-    keyInputStream.close();
   }
 
   @Test
@@ -397,4 +411,114 @@
       Assert.assertEquals(inputData[chunkSize + 50 + i], readData[i]);
     }
   }
+
+  @Test
+  public void readAfterReplication() throws Exception {
+    testReadAfterReplication(false);
+  }
+
+  @Test
+  public void readAfterReplicationWithUnbuffering() throws Exception {
+    testReadAfterReplication(true);
+  }
+
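+  /**
+   * Writes a key, waits for its container to close, stops one datanode from
+   * the write pipeline and verifies the remaining data can still be read.
+   */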
+  private void testReadAfterReplication(boolean doUnbuffer) throws Exception {
+    Assume.assumeTrue(cluster.getHddsDatanodes().size() > 3);
+
+    int dataLength = 2 * chunkSize;
+    String keyName = getKeyName();
+    OzoneOutputStream key = TestHelper.createKey(keyName,
+        ReplicationType.RATIS, dataLength, objectStore, volumeName, bucketName);
+
+    byte[] data = writeRandomBytes(key, dataLength);
+
+    OmKeyArgs keyArgs = new OmKeyArgs.Builder().setVolumeName(volumeName)
+        .setBucketName(bucketName)
+        .setKeyName(keyName)
+        .setType(HddsProtos.ReplicationType.RATIS)
+        .setFactor(HddsProtos.ReplicationFactor.THREE)
+        .build();
+    OmKeyInfo keyInfo = cluster.getOzoneManager().lookupKey(keyArgs);
+
+    OmKeyLocationInfoGroup locations = keyInfo.getLatestVersionLocations();
+    Assert.assertNotNull(locations);
+    List<OmKeyLocationInfo> locationInfoList = locations.getLocationList();
+    Assert.assertEquals(1, locationInfoList.size());
+    OmKeyLocationInfo loc = locationInfoList.get(0);
+    long containerID = loc.getContainerID();
+    Assert.assertEquals(3, countReplicas(containerID, cluster));
+
+    TestHelper.waitForContainerClose(cluster, containerID);
+
+    List<DatanodeDetails> pipelineNodes = loc.getPipeline().getNodes();
+
+    // read chunk data
+    try (KeyInputStream keyInputStream = (KeyInputStream) objectStore
+        .getVolume(volumeName).getBucket(bucketName)
+        .readKey(keyName).getInputStream()) {
+
+      int b = keyInputStream.read();
+      Assert.assertNotEquals(-1, b);
+
+      if (doUnbuffer) {
+        keyInputStream.unbuffer();
+      }
+
+      cluster.shutdownHddsDatanode(pipelineNodes.get(0));
+
+      // check that we can still read it
+      assertReadFully(data, keyInputStream, dataLength - 1, 1);
+    }
+  }
+
+  private static void waitForNodeToBecomeDead(
+      DatanodeDetails datanode) throws TimeoutException, InterruptedException {
+    GenericTestUtils.waitFor(() ->
+        HddsProtos.NodeState.DEAD == getNodeHealth(datanode),
+        100, 30000);
+    LOG.info("Node {} is {}", datanode.getUuidString(),
+        getNodeHealth(datanode));
+  }
+
+  private static HddsProtos.NodeState getNodeHealth(DatanodeDetails dn) {
+    HddsProtos.NodeState health = null;
+    try {
+      NodeManager nodeManager =
+          cluster.getStorageContainerManager().getScmNodeManager();
+      health = nodeManager.getNodeStatus(dn).getHealth();
+    } catch (NodeNotFoundException e) {
+      fail("Unexpected NodeNotFound exception");
+    }
+    return health;
+  }
+
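+  /**
+   * Writes {@code size} random bytes to the given stream, closes it and
+   * returns the data that was written.
+   */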
+  private byte[] writeRandomBytes(OutputStream key, int size)
+      throws IOException {
+    byte[] data = new byte[size];
+    Random r = new Random();
+    r.nextBytes(data);
+    key.write(data);
+    key.close();
+    return data;
+  }
+
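+  /**
+   * Reads the stream in {@code bufferSize} chunks, starting at offset
+   * {@code totalRead} of {@code data}, and asserts that the bytes match.
+   */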
+  private static void assertReadFully(byte[] data, InputStream in,
+      int bufferSize, int totalRead) throws IOException {
+
+    byte[] buffer = new byte[bufferSize];
+    while (totalRead < data.length) {
+      int numBytesRead = in.read(buffer);
+      if (numBytesRead == -1 || numBytesRead == 0) {
+        break;
+      }
+      byte[] tmp1 =
+          Arrays.copyOfRange(data, totalRead, totalRead + numBytesRead);
+      byte[] tmp2 =
+          Arrays.copyOfRange(buffer, 0, numBytesRead);
+      Assert.assertArrayEquals(tmp1, tmp2);
+      totalRead += numBytesRead;
+    }
+    Assert.assertEquals(data.length, totalRead);
+  }
+
 }
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneAtRestEncryption.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneAtRestEncryption.java
index 324db98..7aced89 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneAtRestEncryption.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneAtRestEncryption.java
@@ -19,6 +19,7 @@
 
 import java.io.File;
 import java.io.IOException;
+import java.nio.charset.StandardCharsets;
 import java.security.NoSuchAlgorithmException;
 import java.time.Instant;
 import java.util.HashMap;
@@ -177,9 +178,10 @@
       String keyName = UUID.randomUUID().toString();
 
       try (OzoneOutputStream out = bucket.createKey(keyName,
-          value.getBytes("UTF-8").length, ReplicationType.STAND_ALONE,
+          value.getBytes(StandardCharsets.UTF_8).length,
+          ReplicationType.STAND_ALONE,
           ReplicationFactor.ONE, new HashMap<>())) {
-        out.write(value.getBytes("UTF-8"));
+        out.write(value.getBytes(StandardCharsets.UTF_8));
       }
 
       OzoneKey key = bucket.getKey(keyName);
@@ -188,7 +190,7 @@
       int len = 0;
 
       try(OzoneInputStream is = bucket.readKey(keyName)) {
-        fileContent = new byte[value.getBytes("UTF-8").length];
+        fileContent = new byte[value.getBytes(StandardCharsets.UTF_8).length];
         len = is.read(fileContent);
       }
 
@@ -196,7 +198,8 @@
       Assert.assertTrue(verifyRatisReplication(volumeName, bucketName,
           keyName, ReplicationType.STAND_ALONE,
           ReplicationFactor.ONE));
-      Assert.assertEquals(value, new String(fileContent, "UTF-8"));
+      Assert.assertEquals(value, new String(fileContent,
+          StandardCharsets.UTF_8));
       Assert.assertFalse(key.getCreationTime().isBefore(testStartTime));
       Assert.assertFalse(key.getModificationTime().isBefore(testStartTime));
     }
@@ -235,9 +238,10 @@
     Map<String, String> keyMetadata = new HashMap<>();
     keyMetadata.put(OzoneConsts.GDPR_FLAG, "true");
     try (OzoneOutputStream out = bucket.createKey(keyName,
-        value.getBytes("UTF-8").length, ReplicationType.STAND_ALONE,
+        value.getBytes(StandardCharsets.UTF_8).length,
+        ReplicationType.STAND_ALONE,
         ReplicationFactor.ONE, keyMetadata)) {
-      out.write(value.getBytes("UTF-8"));
+      out.write(value.getBytes(StandardCharsets.UTF_8));
     }
 
     OzoneKeyDetails key = bucket.getKey(keyName);
@@ -246,7 +250,7 @@
     int len = 0;
 
     try(OzoneInputStream is = bucket.readKey(keyName)) {
-      fileContent = new byte[value.getBytes("UTF-8").length];
+      fileContent = new byte[value.getBytes(StandardCharsets.UTF_8).length];
       len = is.read(fileContent);
     }
 
@@ -254,7 +258,7 @@
     Assert.assertTrue(verifyRatisReplication(volumeName, bucketName,
         keyName, ReplicationType.STAND_ALONE,
         ReplicationFactor.ONE));
-    Assert.assertEquals(value, new String(fileContent, "UTF-8"));
+    Assert.assertEquals(value, new String(fileContent, StandardCharsets.UTF_8));
     Assert.assertFalse(key.getCreationTime().isBefore(testStartTime));
     Assert.assertFalse(key.getModificationTime().isBefore(testStartTime));
     Assert.assertEquals("true", key.getMetadata().get(OzoneConsts.GDPR_FLAG));
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneClientRetriesOnException.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneClientRetriesOnException.java
index 6b1a80a..af3ec90 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneClientRetriesOnException.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneClientRetriesOnException.java
@@ -53,6 +53,7 @@
 import org.apache.ratis.protocol.exceptions.GroupMismatchException;
 import org.junit.After;
 import org.junit.Assert;
+import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Rule;
 import org.junit.Test;
@@ -63,6 +64,8 @@
  */
 public class TestOzoneClientRetriesOnException {
 
+  private static final int MAX_RETRIES = 3;
+
   /**
     * Set a timeout for each test.
     */
@@ -97,7 +100,7 @@
     blockSize = 2 * maxFlushSize;
 
     OzoneClientConfig clientConfig = conf.getObject(OzoneClientConfig.class);
-    clientConfig.setMaxRetryCount(3);
+    clientConfig.setMaxRetryCount(MAX_RETRIES);
     clientConfig.setChecksumType(ChecksumType.NONE);
     clientConfig.setStreamBufferFlushDelay(false);
     conf.setFromObject(clientConfig);
@@ -189,11 +192,12 @@
   public void testMaxRetriesByOzoneClient() throws Exception {
     String keyName = getKeyName();
     OzoneOutputStream key =
-        createKey(keyName, ReplicationType.RATIS, 4 * blockSize);
+        createKey(keyName, ReplicationType.RATIS, (MAX_RETRIES+1) * blockSize);
     Assert.assertTrue(key.getOutputStream() instanceof KeyOutputStream);
     KeyOutputStream keyOutputStream = (KeyOutputStream) key.getOutputStream();
     List<BlockOutputStreamEntry> entries = keyOutputStream.getStreamEntries();
-    Assert.assertTrue(keyOutputStream.getStreamEntries().size() == 4);
+    Assert.assertEquals((MAX_RETRIES + 1),
+        keyOutputStream.getStreamEntries().size());
     int dataLength = maxFlushSize + 50;
     // write data more than 1 chunk
     byte[] data1 =
@@ -211,11 +215,10 @@
               .getPipeline(container.getPipelineID());
       XceiverClientSpi xceiverClient =
           xceiverClientManager.acquireClient(pipeline);
-      if (!containerList.contains(containerID)) {
-        containerList.add(containerID);
-        xceiverClient.sendCommand(ContainerTestHelper
-            .getCreateContainerRequest(containerID, pipeline));
-      }
+      Assume.assumeFalse(containerList.contains(containerID));
+      containerList.add(containerID);
+      xceiverClient.sendCommand(ContainerTestHelper
+          .getCreateContainerRequest(containerID, pipeline));
       xceiverClientManager.releaseClient(xceiverClient, false);
     }
     key.write(data1);
@@ -223,11 +226,12 @@
     Assert.assertTrue(stream instanceof BlockOutputStream);
     BlockOutputStream blockOutputStream = (BlockOutputStream) stream;
     TestHelper.waitForContainerClose(key, cluster);
-    // Ensure that blocks for the key have been allocated to atleast 3 different
-    // containers so that write request will be tried on 3 different blocks
-    // of 3 different containers and it will finally fail as it will hit
-    // the max retry count of 3.
-    Assert.assertTrue(containerList.size() >= 3);
+    // Ensure that blocks for the key have been allocated to at least N+1
+    // containers, so that the write request is retried on N+1 different
+    // blocks in N+1 different containers and finally fails once it hits
+    // the max retry count of N.
+    Assume.assumeTrue(containerList.size() + " <= " + MAX_RETRIES,
+        containerList.size() > MAX_RETRIES);
     try {
       key.write(data1);
       // ensure that write is flushed to dn
@@ -240,7 +244,7 @@
               getMessage().contains(
               "Retry request failed. " +
                       "retries get failed due to exceeded maximum " +
-                      "allowed retries number: 3"));
+                      "allowed retries number: " + MAX_RETRIES));
     }
     try {
       key.flush();
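The retry test above now derives its expectations from a single MAX_RETRIES constant and turns the container precondition checks into JUnit assumptions, so the test is skipped rather than failed when the cluster does not produce enough distinct containers. A small sketch of that assumption pattern; the class name and the hard-coded container count stand in for real cluster state:

    import org.junit.Assume;
    import org.junit.Test;

    public class RetryPreconditionExample {
      private static final int MAX_RETRIES = 3;

      @Test
      public void skipsWhenTooFewContainers() {
        int containerCount = 2; // stand-in for containerList.size()
        // assumeTrue skips the test (rather than failing it) when the
        // precondition is not met; the message documents why it was skipped.
        Assume.assumeTrue(containerCount + " <= " + MAX_RETRIES,
            containerCount > MAX_RETRIES);
        // The retry-exhaustion scenario would run here.
      }
    }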
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
index 88222f1..2dbb73a 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
@@ -116,6 +116,7 @@
 import static org.apache.hadoop.ozone.OzoneAcl.AclScope.DEFAULT;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE_DEFAULT;
+import static org.apache.hadoop.ozone.OzoneConsts.DEFAULT_OM_UPDATE_ID;
 import static org.apache.hadoop.ozone.OzoneConsts.GB;
 import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.KEY_NOT_FOUND;
 import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.NO_SUCH_MULTIPART_UPLOAD_ERROR;
@@ -124,6 +125,8 @@
 import static org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType.USER;
 import static org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.READ;
 import org.junit.Assert;
+
+import static org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType.WRITE;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
@@ -260,7 +263,7 @@
         cluster.getOzoneManager().getMetadataManager().getVolumeTable().get(
             omMetadataManager.getVolumeKey(s3VolumeName));
     Assert.assertEquals(objectID, omVolumeArgs.getObjectID());
-    Assert.assertEquals(transactionID, omVolumeArgs.getUpdateID());
+    Assert.assertEquals(DEFAULT_OM_UPDATE_ID, omVolumeArgs.getUpdateID());
   }
 
   @Test
@@ -277,9 +280,11 @@
   }
 
   @Test
-  public void testSetAndClrQuota() throws IOException {
+  public void testSetAndClrQuota() throws Exception {
     String volumeName = UUID.randomUUID().toString();
     String bucketName = UUID.randomUUID().toString();
+    String value = "sample value";
+    int valueLength = value.getBytes().length;
     OzoneVolume volume = null;
     store.createVolume(volumeName);
 
@@ -287,41 +292,61 @@
         "0GB", 0L));
     volume = store.getVolume(volumeName);
     Assert.assertEquals(OzoneConsts.QUOTA_RESET, volume.getQuotaInBytes());
-    Assert.assertEquals(OzoneConsts.QUOTA_RESET, volume.getQuotaInCounts());
+    Assert.assertEquals(OzoneConsts.QUOTA_RESET, volume.getQuotaInNamespace());
 
     store.getVolume(volumeName).setQuota(OzoneQuota.parseQuota(
         "10GB", 10000L));
     store.getVolume(volumeName).createBucket(bucketName);
     volume = store.getVolume(volumeName);
     Assert.assertEquals(10 * GB, volume.getQuotaInBytes());
-    Assert.assertEquals(10000L, volume.getQuotaInCounts());
+    Assert.assertEquals(10000L, volume.getQuotaInNamespace());
     OzoneBucket bucket = store.getVolume(volumeName).getBucket(bucketName);
     Assert.assertEquals(OzoneConsts.QUOTA_RESET, bucket.getQuotaInBytes());
-    Assert.assertEquals(OzoneConsts.QUOTA_RESET, bucket.getQuotaInCounts());
+    Assert.assertEquals(OzoneConsts.QUOTA_RESET, bucket.getQuotaInNamespace());
 
     store.getVolume(volumeName).getBucket(bucketName).setQuota(
         OzoneQuota.parseQuota("0GB", 0));
     Assert.assertEquals(OzoneConsts.QUOTA_RESET, bucket.getQuotaInBytes());
-    Assert.assertEquals(OzoneConsts.QUOTA_RESET, bucket.getQuotaInCounts());
+    Assert.assertEquals(OzoneConsts.QUOTA_RESET, bucket.getQuotaInNamespace());
 
     store.getVolume(volumeName).getBucket(bucketName).setQuota(
         OzoneQuota.parseQuota("1GB", 1000L));
     OzoneBucket ozoneBucket = store.getVolume(volumeName).getBucket(bucketName);
     Assert.assertEquals(1024 * 1024 * 1024,
         ozoneBucket.getQuotaInBytes());
-    Assert.assertEquals(1000L, ozoneBucket.getQuotaInCounts());
+    Assert.assertEquals(1000L, ozoneBucket.getQuotaInNamespace());
+
+    LambdaTestUtils.intercept(IOException.class, "Can not clear bucket" +
+        " spaceQuota because volume spaceQuota is not cleared.",
+        () -> ozoneBucket.clearSpaceQuota());
+
+    writeKey(bucket, UUID.randomUUID().toString(), ONE, value, valueLength);
+    Assert.assertEquals(1L,
+        store.getVolume(volumeName).getBucket(bucketName).getUsedNamespace());
+    Assert.assertEquals(valueLength,
+        store.getVolume(volumeName).getBucket(bucketName).getUsedBytes());
+    Assert.assertEquals(1L,
+        store.getVolume(volumeName).getUsedNamespace());
 
     store.getVolume(volumeName).clearSpaceQuota();
-    store.getVolume(volumeName).clearCountQuota();
+    store.getVolume(volumeName).clearNamespaceQuota();
     OzoneVolume clrVolume = store.getVolume(volumeName);
     Assert.assertEquals(OzoneConsts.QUOTA_RESET, clrVolume.getQuotaInBytes());
-    Assert.assertEquals(OzoneConsts.QUOTA_RESET, clrVolume.getQuotaInCounts());
+    Assert.assertEquals(OzoneConsts.QUOTA_RESET,
+        clrVolume.getQuotaInNamespace());
 
     ozoneBucket.clearSpaceQuota();
-    ozoneBucket.clearCountQuota();
+    ozoneBucket.clearNamespaceQuota();
     OzoneBucket clrBucket = store.getVolume(volumeName).getBucket(bucketName);
     Assert.assertEquals(OzoneConsts.QUOTA_RESET, clrBucket.getQuotaInBytes());
-    Assert.assertEquals(OzoneConsts.QUOTA_RESET, clrBucket.getQuotaInCounts());
+    Assert.assertEquals(OzoneConsts.QUOTA_RESET,
+        clrBucket.getQuotaInNamespace());
+    Assert.assertEquals(1L,
+        store.getVolume(volumeName).getBucket(bucketName).getUsedNamespace());
+    Assert.assertEquals(valueLength,
+        store.getVolume(volumeName).getBucket(bucketName).getUsedBytes());
+    Assert.assertEquals(1L,
+        store.getVolume(volumeName).getUsedNamespace());
   }
 
   @Test
@@ -382,12 +407,12 @@
     Assert.assertEquals(OzoneConsts.QUOTA_RESET,
         store.getVolume(volumeName).getQuotaInBytes());
     Assert.assertEquals(OzoneConsts.QUOTA_RESET,
-        store.getVolume(volumeName).getQuotaInCounts());
+        store.getVolume(volumeName).getQuotaInNamespace());
     store.getVolume(volumeName).setQuota(OzoneQuota.parseQuota("1GB", 1000L));
     OzoneVolume volume = store.getVolume(volumeName);
     Assert.assertEquals(1024 * 1024 * 1024,
         volume.getQuotaInBytes());
-    Assert.assertEquals(1000L, volume.getQuotaInCounts());
+    Assert.assertEquals(1000L, volume.getQuotaInNamespace());
   }
 
   @Test
@@ -882,9 +907,113 @@
     Assert.assertEquals(4 * blockSize,
         store.getVolume(volumeName).getBucket(bucketName).getUsedBytes());
 
+    // Reset the bucket quota; the original usedBytes must remain unchanged.
+    bucket.setQuota(OzoneQuota.parseQuota(
+        100 + " GB", 100));
+    Assert.assertEquals(4 * blockSize,
+        store.getVolume(volumeName).getBucket(bucketName).getUsedBytes());
+
     Assert.assertEquals(3, countException);
   }
 
+  @Test
+  public void testVolumeUsedNamespace() throws IOException {
+    String volumeName = UUID.randomUUID().toString();
+    String bucketName = UUID.randomUUID().toString();
+    String bucketName2 = UUID.randomUUID().toString();
+    OzoneVolume volume = null;
+
+    // Set the volume namespace quota to 1.
+    store.createVolume(volumeName,
+        VolumeArgs.newBuilder().setQuotaInNamespace(1L).build());
+    volume = store.getVolume(volumeName);
+    // The initial value should be 0
+    Assert.assertEquals(0L, volume.getUsedNamespace());
+    volume.createBucket(bucketName);
+    // Used namespace should be 1
+    volume = store.getVolume(volumeName);
+    Assert.assertEquals(1L, volume.getUsedNamespace());
+
+    try {
+      volume.createBucket(bucketName2);
+    } catch (IOException ex) {
+      GenericTestUtils.assertExceptionContains("QUOTA_EXCEEDED", ex);
+    }
+
+    // test linked bucket
+    String targetVolName = UUID.randomUUID().toString();
+    store.createVolume(targetVolName);
+    OzoneVolume volumeWithLinkedBucket = store.getVolume(targetVolName);
+    String targetBucketName = UUID.randomUUID().toString();
+    BucketArgs.Builder argsBuilder = new BucketArgs.Builder()
+        .setStorageType(StorageType.DEFAULT)
+        .setVersioning(false)
+        .setSourceVolume(volumeName)
+        .setSourceBucket(bucketName);
+    volumeWithLinkedBucket.createBucket(targetBucketName, argsBuilder.build());
+    // Used namespace should be 0 because a linked bucket does not consume
+    // namespace quota.
+    Assert.assertEquals(0L, volumeWithLinkedBucket.getUsedNamespace());
+
+    // Reset the volume quota; the original usedNamespace must remain
+    // unchanged.
+    store.getVolume(volumeName).setQuota(OzoneQuota.parseQuota(
+        100 + " GB", 100));
+    Assert.assertEquals(1L,
+        store.getVolume(volumeName).getUsedNamespace());
+
+    volume.deleteBucket(bucketName);
+    // Used namespace should be 0
+    volume = store.getVolume(volumeName);
+    Assert.assertEquals(0L, volume.getUsedNamespace());
+  }
+
+  @Test
+  public void testBucketUsedNamespace() throws IOException {
+    String volumeName = UUID.randomUUID().toString();
+    String bucketName = UUID.randomUUID().toString();
+    String key1 = UUID.randomUUID().toString();
+    String key2 = UUID.randomUUID().toString();
+    String key3 = UUID.randomUUID().toString();
+    OzoneVolume volume = null;
+    OzoneBucket bucket = null;
+
+    String value = "sample value";
+
+    store.createVolume(volumeName);
+    volume = store.getVolume(volumeName);
+    volume.createBucket(bucketName);
+    bucket = volume.getBucket(bucketName);
+    bucket.setQuota(OzoneQuota.parseQuota(Long.MAX_VALUE + " Bytes", 2));
+
+    writeKey(bucket, key1, ONE, value, value.length());
+    Assert.assertEquals(1L,
+        store.getVolume(volumeName).getBucket(bucketName).getUsedNamespace());
+
+    writeKey(bucket, key2, ONE, value, value.length());
+    Assert.assertEquals(2L,
+        store.getVolume(volumeName).getBucket(bucketName).getUsedNamespace());
+
+    try {
+      writeKey(bucket, key3, ONE, value, value.length());
+      Assert.fail("Write key should have failed");
+    } catch (IOException ex) {
+      GenericTestUtils.assertExceptionContains("QUOTA_EXCEEDED", ex);
+    }
+
+    // The write failed, so bucket usedNamespace should remain 2.
+    Assert.assertEquals(2L,
+        store.getVolume(volumeName).getBucket(bucketName).getUsedNamespace());
+
+    // Reset the bucket quota; the original usedNamespace must remain
+    // unchanged.
+    bucket.setQuota(OzoneQuota.parseQuota(Long.MAX_VALUE + " Bytes", 100));
+    Assert.assertEquals(2L,
+        store.getVolume(volumeName).getBucket(bucketName).getUsedNamespace());
+
+    bucket.deleteKeys(Arrays.asList(key1, key2));
+    Assert.assertEquals(0L,
+        store.getVolume(volumeName).getBucket(bucketName).getUsedNamespace());
+  }
+
   private void writeKey(OzoneBucket bucket, String keyName,
       ReplicationFactor replication, String value, int valueLength)
       throws IOException{
@@ -3212,4 +3341,24 @@
           deletedKeyMetadata.containsKey(OzoneConsts.GDPR_ALGORITHM));
     }
   }
+
+
+  @Test
+  public void setS3VolumeAcl() throws Exception {
+    OzoneObj s3vVolume = new OzoneObjInfo.Builder()
+        .setVolumeName(HddsClientUtils.getS3VolumeName(cluster.getConf()))
+        .setResType(OzoneObj.ResourceType.VOLUME)
+        .setStoreType(OzoneObj.StoreType.OZONE)
+        .build();
+
+    OzoneAcl ozoneAcl = new OzoneAcl(USER, remoteUserName, WRITE, DEFAULT);
+
+    boolean result = store.addAcl(s3vVolume, ozoneAcl);
+
+    Assert.assertTrue("SetAcl on default s3v failed", result);
+
+    List<OzoneAcl> ozoneAclList = store.getAcl(s3vVolume);
+
+    Assert.assertTrue(ozoneAclList.contains(ozoneAcl));
+  }
 }
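These hunks follow the quota API rename from *InCounts/clearCountQuota to *InNamespace/clearNamespaceQuota and add coverage for usedNamespace/usedBytes accounting. A hedged sketch of reading that accounting through the client API; the ObjectStore is assumed to come from an already initialized OzoneClient, and the wrapper class is illustrative:

    import java.io.IOException;
    import org.apache.hadoop.ozone.client.ObjectStore;
    import org.apache.hadoop.ozone.client.OzoneBucket;
    import org.apache.hadoop.ozone.client.OzoneVolume;

    public final class NamespaceUsageSketch {
      private NamespaceUsageSketch() { }

      // "store" is assumed to come from an initialized OzoneClient.
      static void printUsage(ObjectStore store, String volumeName,
          String bucketName) throws IOException {
        OzoneVolume volume = store.getVolume(volumeName);
        OzoneBucket bucket = volume.getBucket(bucketName);
        // usedNamespace counts buckets under a volume and keys under a bucket;
        // usedBytes tracks the bytes consumed by keys in the bucket.
        System.out.println("volume usedNamespace = " + volume.getUsedNamespace());
        System.out.println("bucket usedNamespace = " + bucket.getUsedNamespace());
        System.out.println("bucket usedBytes     = " + bucket.getUsedBytes());
      }
    }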
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java
index 9058d34..5e5ce74 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java
@@ -36,10 +36,11 @@
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.ratis.conf.RatisClientConfig;
 import org.apache.hadoop.hdds.scm.OzoneClientConfig;
-import org.apache.hadoop.hdds.scm.XceiverClientManager;
 import org.apache.hadoop.hdds.scm.XceiverClientRatis;
+import org.apache.hadoop.hdds.scm.XceiverClientManager;
 import org.apache.hadoop.hdds.scm.XceiverClientReply;
 import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.client.HddsClientUtils;
 import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
@@ -107,6 +108,7 @@
     conf.setTimeDuration(OZONE_SCM_PIPELINE_DESTROY_TIMEOUT, 10,
             TimeUnit.SECONDS);
     conf.setQuietMode(false);
+    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 1);
 
     RatisClientConfig ratisClientConfig =
         conf.getObject(RatisClientConfig.class);
@@ -136,6 +138,7 @@
         .setStreamBufferSizeUnit(StorageUnit.BYTES)
         .build();
     cluster.waitForClusterToBeReady();
+    cluster.waitForPipelineTobeReady(HddsProtos.ReplicationFactor.THREE, 60000);
     //the easiest way to create an open container is creating a key
     client = OzoneClientFactory.getRpcClient(conf);
     objectStore = client.getObjectStore();
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/TestHelper.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/TestHelper.java
index 8c18262..bad6054 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/TestHelper.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/TestHelper.java
@@ -24,10 +24,13 @@
 import java.util.concurrent.TimeoutException;
 import org.apache.hadoop.hdds.client.ReplicationType;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.ratis.RatisHelper;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
 import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
 import org.apache.hadoop.hdds.scm.container.ContainerNotFoundException;
+import org.apache.hadoop.hdds.scm.container.ContainerReplica;
 import org.apache.hadoop.hdds.scm.events.SCMEvents;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException;
@@ -46,16 +49,22 @@
 
 import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
 import org.apache.hadoop.test.GenericTestUtils;
-import org.apache.ratis.protocol.RaftGroupId;
 import org.apache.ratis.server.RaftServer;
 import org.apache.ratis.statemachine.StateMachine;
 import org.junit.Assert;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static java.util.stream.Collectors.toList;
 
 /**
  * Helpers for container tests.
  */
 public final class TestHelper {
 
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestHelper.class);
+
   /**
    * Never constructed.
    */
@@ -94,6 +103,19 @@
     return false;
   }
 
+  public static int countReplicas(long containerID,
+      Set<HddsDatanodeService> datanodes) {
+    int count = 0;
+    for (HddsDatanodeService datanodeService : datanodes) {
+      Container<?> container = datanodeService.getDatanodeStateMachine()
+          .getContainer().getContainerSet().getContainer(containerID);
+      if (container != null) {
+        count++;
+      }
+    }
+    return count;
+  }
+
   public static OzoneOutputStream createKey(String keyName,
       ReplicationType type, long size, ObjectStore objectStore,
       String volumeName, String bucketName) throws Exception {
@@ -211,12 +233,14 @@
 
     // wait for the pipeline to get destroyed in the datanodes
     for (Pipeline pipeline : pipelineList) {
+      HddsProtos.PipelineID pipelineId = pipeline.getId().getProtobuf();
       for (DatanodeDetails dn : pipeline.getNodes()) {
         XceiverServerSpi server =
             cluster.getHddsDatanodes().get(cluster.getHddsDatanodeIndex(dn))
                 .getDatanodeStateMachine().getContainer().getWriteChannel();
         Assert.assertTrue(server instanceof XceiverServerRatis);
-        server.removeGroup(pipeline.getId().getProtobuf());
+        GenericTestUtils.waitFor(() -> !server.isExist(pipelineId),
+            100, 30_000);
       }
     }
   }
@@ -300,13 +324,12 @@
 
   private static RaftServer.Division getRaftServerDivision(
       HddsDatanodeService dn, Pipeline pipeline) throws Exception {
-    XceiverServerSpi serverSpi = dn.getDatanodeStateMachine().
-        getContainer().getWriteChannel();
-    RaftServer server = (((XceiverServerRatis) serverSpi).getServer());
-    RaftGroupId groupId =
-        pipeline == null ? server.getGroupIds().iterator().next() :
-            RatisHelper.newRaftGroup(pipeline).getGroupId();
-    return server.getDivision(groupId);
+    XceiverServerRatis server =
+        (XceiverServerRatis) (dn.getDatanodeStateMachine().
+            getContainer().getWriteChannel());
+    return pipeline == null ? server.getServerDivision() :
+        server.getServerDivision(
+            RatisHelper.newRaftGroup(pipeline).getGroupId());
   }
 
   public static StateMachine getStateMachine(HddsDatanodeService dn,
@@ -317,8 +340,48 @@
   public static HddsDatanodeService getDatanodeService(OmKeyLocationInfo info,
       MiniOzoneCluster cluster) throws IOException {
     DatanodeDetails dnDetails =  info.getPipeline().
-            getFirstNode();
+        getFirstNode();
     return cluster.getHddsDatanodes().get(cluster.
-            getHddsDatanodeIndex(dnDetails));
+        getHddsDatanodeIndex(dnDetails));
+  }
+
+  public static Set<HddsDatanodeService> getDatanodeServices(
+      MiniOzoneCluster cluster, Pipeline pipeline) {
+    Set<HddsDatanodeService> services = new HashSet<>();
+    Set<DatanodeDetails> pipelineNodes = pipeline.getNodeSet();
+    for (HddsDatanodeService service : cluster.getHddsDatanodes()) {
+      if (pipelineNodes.contains(service.getDatanodeDetails())) {
+        services.add(service);
+      }
+    }
+    Assert.assertEquals(pipelineNodes.size(), services.size());
+    return services;
+  }
+
+  public static int countReplicas(long containerID, MiniOzoneCluster cluster) {
+    ContainerManager containerManager = cluster.getStorageContainerManager()
+        .getContainerManager();
+    try {
+      Set<ContainerReplica> replicas = containerManager
+          .getContainerReplicas(ContainerID.valueof(containerID));
+      LOG.info("Container {} has {} replicas on {}", containerID,
+          replicas.size(),
+          replicas.stream()
+              .map(ContainerReplica::getDatanodeDetails)
+              .map(DatanodeDetails::getUuidString)
+              .sorted()
+              .collect(toList())
+      );
+      return replicas.size();
+    } catch (ContainerNotFoundException e) {
+      LOG.warn("Container {} not found", containerID);
+      return 0;
+    }
+  }
+
+  public static void waitForReplicaCount(long containerID, int count,
+      MiniOzoneCluster cluster) throws TimeoutException, InterruptedException {
+    GenericTestUtils.waitFor(() -> countReplicas(containerID, cluster) == count,
+        1000, 30_000);
   }
 }
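TestHelper gains countReplicas and waitForReplicaCount, which read replica counts from SCM's ContainerManager instead of asking individual datanodes. A hedged usage sketch; the cluster and container id are assumed to be supplied by the calling test, and the wrapper class is illustrative:

    import org.apache.hadoop.ozone.MiniOzoneCluster;
    import org.apache.hadoop.ozone.container.TestHelper;
    import org.junit.Assert;

    public final class ReplicaWaitSketch {
      private ReplicaWaitSketch() { }

      // cluster and containerID are assumed to come from the calling test.
      static void waitForThreeReplicas(MiniOzoneCluster cluster,
          long containerID) throws Exception {
        // Polls countReplicas(containerID, cluster) every second, up to 30s,
        // until SCM reports the expected replica count.
        TestHelper.waitForReplicaCount(containerID, 3, cluster);
        Assert.assertEquals(3, TestHelper.countReplicas(containerID, cluster));
      }
    }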
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/metrics/TestContainerMetrics.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/metrics/TestContainerMetrics.java
index ae8aae9..b25d4b0 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/metrics/TestContainerMetrics.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/metrics/TestContainerMetrics.java
@@ -47,9 +47,6 @@
 import org.apache.hadoop.ozone.container.common.volume.HddsVolume;
 import org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet;
 import org.apache.hadoop.ozone.container.common.volume.VolumeSet;
-import org.apache.hadoop.ozone.container.ozoneimpl.ContainerController;
-import org.apache.hadoop.ozone.container.replication.GrpcReplicationService;
-import org.apache.hadoop.ozone.container.replication.OnDemandContainerReplicationSource;
 import org.apache.hadoop.test.GenericTestUtils;
 
 import com.google.common.collect.Maps;
@@ -74,12 +71,6 @@
   @Rule
   public Timeout timeout = new Timeout(300000);
 
-  private GrpcReplicationService createReplicationService(
-      ContainerController controller) {
-    return new GrpcReplicationService(
-        new OnDemandContainerReplicationSource(controller));
-  }
-
   @Test
   public void testContainerMetrics() throws Exception {
     XceiverServerGrpc server = null;
@@ -123,9 +114,7 @@
           volumeSet, handlers, context, metrics, null);
       dispatcher.setScmId(UUID.randomUUID().toString());
 
-      server = new XceiverServerGrpc(datanodeDetails, conf, dispatcher, null,
-          createReplicationService(new ContainerController(
-              containerSet, handlers)));
+      server = new XceiverServerGrpc(datanodeDetails, conf, dispatcher, null);
       client = new XceiverClientGrpc(pipeline, conf);
 
       server.start();
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestContainerServer.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestContainerServer.java
index 77ca936..a29453d 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestContainerServer.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestContainerServer.java
@@ -18,55 +18,6 @@
 
 package org.apache.hadoop.ozone.container.server;
 
-import com.google.common.collect.Maps;
-import org.apache.hadoop.hdds.HddsConfigKeys;
-import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
-import org.apache.hadoop.hdds.scm.pipeline.MockPipeline;
-import org.apache.hadoop.hdds.security.x509.SecurityConfig;
-import org.apache.hadoop.hdds.security.x509.certificate.client.CertificateClient;
-import org.apache.hadoop.hdds.security.x509.certificate.client.DNCertificateClient;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerMetrics;
-import org.apache.hadoop.ozone.container.common.impl.ContainerSet;
-import org.apache.hadoop.ozone.container.common.impl.HddsDispatcher;
-import org.apache.hadoop.ozone.container.common.impl.TestHddsDispatcher;
-import org.apache.hadoop.ozone.container.common.interfaces.Handler;
-import org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine;
-import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
-import org.apache.hadoop.ozone.container.common.transport.server.ratis.DispatcherContext;
-import org.apache.hadoop.ozone.container.common.volume.VolumeSet;
-import org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet;
-import org.apache.hadoop.ozone.container.ozoneimpl.ContainerController;
-import org.apache.hadoop.ozone.container.replication.GrpcReplicationService;
-import org.apache.hadoop.ozone.container.replication.OnDemandContainerReplicationSource;
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
-import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
-    .ContainerCommandRequestProto;
-import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
-    .ContainerCommandResponseProto;
-
-import org.apache.hadoop.ozone.OzoneConfigKeys;
-import org.apache.hadoop.hdds.conf.OzoneConfiguration;
-import org.apache.hadoop.ozone.RatisTestHelper;
-import org.apache.hadoop.ozone.container.ContainerTestHelper;
-import org.apache.hadoop.ozone.container.common.interfaces.ContainerDispatcher;
-import org.apache.hadoop.ozone.container.common.transport.server.XceiverServerGrpc;
-import org.apache.hadoop.ozone.container.common.transport.server.XceiverServerSpi;
-import org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis;
-import org.apache.hadoop.ozone.web.utils.OzoneUtils;
-import org.apache.hadoop.hdds.scm.XceiverClientGrpc;
-import org.apache.hadoop.hdds.scm.XceiverClientRatis;
-import org.apache.hadoop.hdds.scm.XceiverClientSpi;
-import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
-import org.apache.hadoop.test.GenericTestUtils;
-import org.apache.ratis.rpc.RpcType;
-import org.apache.ratis.util.function.CheckedBiConsumer;
-import org.junit.Assert;
-import org.junit.BeforeClass;
-import org.junit.Ignore;
-import org.junit.Test;
-import org.mockito.Mockito;
-
 import java.io.File;
 import java.io.IOException;
 import java.util.ArrayList;
@@ -74,9 +25,53 @@
 import java.util.Map;
 import java.util.UUID;
 
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
+import org.apache.hadoop.hdds.scm.XceiverClientGrpc;
+import org.apache.hadoop.hdds.scm.XceiverClientRatis;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.hdds.scm.pipeline.MockPipeline;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.hdds.security.x509.certificate.client.CertificateClient;
+import org.apache.hadoop.hdds.security.x509.certificate.client.DNCertificateClient;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.RatisTestHelper;
+import org.apache.hadoop.ozone.container.ContainerTestHelper;
+import org.apache.hadoop.ozone.container.common.helpers.ContainerMetrics;
+import org.apache.hadoop.ozone.container.common.impl.ContainerSet;
+import org.apache.hadoop.ozone.container.common.impl.HddsDispatcher;
+import org.apache.hadoop.ozone.container.common.impl.TestHddsDispatcher;
+import org.apache.hadoop.ozone.container.common.interfaces.ContainerDispatcher;
+import org.apache.hadoop.ozone.container.common.interfaces.Handler;
+import org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine;
+import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
+import org.apache.hadoop.ozone.container.common.transport.server.XceiverServerGrpc;
+import org.apache.hadoop.ozone.container.common.transport.server.XceiverServerSpi;
+import org.apache.hadoop.ozone.container.common.transport.server.ratis.DispatcherContext;
+import org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis;
+import org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet;
+import org.apache.hadoop.ozone.container.common.volume.VolumeSet;
+import org.apache.hadoop.ozone.container.ozoneimpl.ContainerController;
+import org.apache.hadoop.ozone.web.utils.OzoneUtils;
+import org.apache.hadoop.test.GenericTestUtils;
+
+import com.google.common.collect.Maps;
 import static org.apache.hadoop.hdds.protocol.MockDatanodeDetails.randomDatanodeDetails;
+import org.apache.ratis.rpc.RpcType;
 import static org.apache.ratis.rpc.SupportedRpcType.GRPC;
 import static org.apache.ratis.rpc.SupportedRpcType.NETTY;
+import org.apache.ratis.util.function.CheckedBiConsumer;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Test;
+import org.mockito.Mockito;
 import static org.mockito.Mockito.mock;
 
 /**
@@ -89,12 +84,6 @@
   private static final OzoneConfiguration CONF = new OzoneConfiguration();
   private static CertificateClient caClient;
 
-  private GrpcReplicationService createReplicationService(
-      ContainerController containerController) {
-    return new GrpcReplicationService(
-        new OnDemandContainerReplicationSource(containerController));
-  }
-
   @BeforeClass
   static public void setup() {
     CONF.set(HddsConfigKeys.HDDS_METADATA_DIR_NAME, TEST_DIR);
@@ -113,8 +102,7 @@
                     .getPort(DatanodeDetails.Port.Name.STANDALONE).getValue()),
         XceiverClientGrpc::new,
         (dn, conf) -> new XceiverServerGrpc(datanodeDetails, conf,
-            new TestContainerDispatcher(), caClient,
-            createReplicationService(controller)), (dn, p) -> {
+            new TestContainerDispatcher(), caClient), (dn, p) -> {
         });
   }
 
@@ -238,8 +226,7 @@
       dispatcher.init();
 
       server = new XceiverServerGrpc(datanodeDetails, conf, dispatcher,
-          caClient, createReplicationService(
-              new ContainerController(containerSet, null)));
+          caClient);
       client = new XceiverClientGrpc(pipeline, conf);
 
       server.start();
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestSecureContainerServer.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestSecureContainerServer.java
index c319c1a..f050e2a 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestSecureContainerServer.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestSecureContainerServer.java
@@ -18,11 +18,16 @@
 
 package org.apache.hadoop.ozone.container.server;
 
-import com.google.common.collect.Maps;
-import org.apache.commons.io.FileUtils;
-import org.apache.commons.lang3.RandomStringUtils;
-import org.apache.commons.lang3.RandomUtils;
-import org.apache.commons.lang3.exception.ExceptionUtils;
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.EnumSet;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.function.Consumer;
+
 import org.apache.hadoop.hdds.HddsConfigKeys;
 import org.apache.hadoop.hdds.client.BlockID;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
@@ -31,6 +36,7 @@
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.BlockTokenSecretProto.AccessModeProto;
 import org.apache.hadoop.hdds.scm.XceiverClientGrpc;
 import org.apache.hadoop.hdds.scm.XceiverClientRatis;
 import org.apache.hadoop.hdds.scm.XceiverClientSpi;
@@ -55,8 +61,8 @@
 import org.apache.hadoop.ozone.container.common.transport.server.XceiverServerGrpc;
 import org.apache.hadoop.ozone.container.common.transport.server.XceiverServerSpi;
 import org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis;
-import org.apache.hadoop.ozone.container.common.volume.VolumeSet;
 import org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet;
+import org.apache.hadoop.ozone.container.common.volume.VolumeSet;
 import org.apache.hadoop.ozone.container.ozoneimpl.ContainerController;
 import org.apache.hadoop.ozone.container.replication.GrpcReplicationService;
 import org.apache.hadoop.ozone.container.replication.OnDemandContainerReplicationSource;
@@ -65,34 +71,32 @@
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.LambdaTestUtils;
 import org.apache.hadoop.util.Time;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.BlockTokenSecretProto.AccessModeProto;
 
-import org.apache.ratis.rpc.RpcType;
-import org.apache.ratis.util.function.CheckedBiConsumer;
-import org.junit.After;
-import org.junit.Assert;
-import org.junit.BeforeClass;
-import org.junit.Ignore;
-import org.junit.Test;
-import org.mockito.Mockito;
-
-import java.io.File;
-import java.io.IOException;
-import java.nio.file.Paths;
-import java.util.ArrayList;
-import java.util.EnumSet;
-import java.util.List;
-import java.util.Map;
-import java.util.UUID;
-import java.util.function.Consumer;
-
+import com.google.common.collect.Maps;
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.commons.lang3.RandomUtils;
+import org.apache.commons.lang3.exception.ExceptionUtils;
 import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_BLOCK_TOKEN_ENABLED;
 import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result.SUCCESS;
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.HDDS_DATANODE_DIR_KEY;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SECURITY_ENABLED_KEY;
-import static org.apache.hadoop.ozone.container.ContainerTestHelper.*;
+import static org.apache.hadoop.ozone.container.ContainerTestHelper.getPutBlockRequest;
+import static org.apache.hadoop.ozone.container.ContainerTestHelper.getTestBlockID;
+import static org.apache.hadoop.ozone.container.ContainerTestHelper.getTestContainerID;
+import static org.apache.hadoop.ozone.container.ContainerTestHelper.getWriteChunkRequest;
+import org.apache.ratis.rpc.RpcType;
 import static org.apache.ratis.rpc.SupportedRpcType.GRPC;
-import static org.junit.Assert.*;
+import org.apache.ratis.util.function.CheckedBiConsumer;
+import org.junit.After;
+import org.junit.Assert;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Test;
+import org.mockito.Mockito;
 
 /**
  * Test Container servers when security is enabled.
@@ -138,8 +142,7 @@
                     .getPort(DatanodeDetails.Port.Name.STANDALONE).getValue()),
         XceiverClientGrpc::new,
         (dn, conf) -> new XceiverServerGrpc(dd, conf,
-            hddsDispatcher, caClient,
-            createReplicationService(controller)), (dn, p) -> {}, (p) -> {});
+            hddsDispatcher, caClient), (dn, p) -> {}, (p) -> {});
   }
 
   private static HddsDispatcher createDispatcher(DatanodeDetails dd, UUID scmId,
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
index f8d4863..e3f1c67 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
@@ -1181,7 +1181,7 @@
       KeyManagerImpl keyManagerImpl =
           new KeyManagerImpl(ozoneManager, scmClientMock, conf, "om1");
 
-      keyManagerImpl.refreshPipeline(omKeyInfo);
+      keyManagerImpl.refresh(omKeyInfo);
 
       verify(sclProtocolMock, times(1))
           .getContainerWithPipelineBatch(containerIDs);
@@ -1226,7 +1226,7 @@
           new KeyManagerImpl(ozoneManager, scmClientMock, conf, "om1");
 
       try {
-        keyManagerImpl.refreshPipeline(omKeyInfo);
+        keyManagerImpl.refresh(omKeyInfo);
         Assert.fail();
       } catch (OMException omEx) {
         Assert.assertEquals(SCM_GET_PIPELINE_EXCEPTION, omEx.getResult());
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOMEpochForNonRatis.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOMEpochForNonRatis.java
new file mode 100644
index 0000000..f6f017f
--- /dev/null
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOMEpochForNonRatis.java
@@ -0,0 +1,179 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om;
+
+import java.util.HashMap;
+import java.util.UUID;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.protocolPB.OmTransportFactory;
+import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import static org.apache.hadoop.ozone.OmUtils.EPOCH_ID_SHIFT;
+import static org.apache.hadoop.ozone.OmUtils.EPOCH_WHEN_RATIS_NOT_ENABLED;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_ENABLE_KEY;
+
+/**
+ * Tests OM epoch generation for when Ratis is not enabled.
+ */
+public class TestOMEpochForNonRatis {
+  private static MiniOzoneCluster cluster = null;
+  private static OzoneConfiguration conf;
+  private static String clusterId;
+  private static String scmId;
+  private static String omId;
+
+  @Rule
+  public Timeout timeout = new Timeout(240_000);
+
+  @BeforeClass
+  public static void init() throws Exception {
+    conf = new OzoneConfiguration();
+    clusterId = UUID.randomUUID().toString();
+    scmId = UUID.randomUUID().toString();
+    omId = UUID.randomUUID().toString();
+    conf.setBoolean(OZONE_OM_RATIS_ENABLE_KEY, false);
+    cluster =  MiniOzoneCluster.newBuilder(conf)
+        .setClusterId(clusterId)
+        .setScmId(scmId)
+        .setOmId(omId)
+        .build();
+    cluster.waitForClusterToBeReady();
+
+  }
+
+  /**
+   * Shutdown the MiniOzoneCluster.
+   */
+  @AfterClass
+  public static void shutdown() {
+    if (cluster != null) {
+      cluster.shutdown();
+    }
+  }
+
+  @Test
+  public void testUniqueTrxnIndexOnOMRestart() throws Exception {
+    // When OM is restarted, the transaction index for requests should not
+    // start from 0. It should incrementally increase from the last
+    // transaction index which was stored in DB before restart.
+
+    String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
+    String bucketName = "bucket" + RandomStringUtils.randomNumeric(5);
+    String keyName = "key" + RandomStringUtils.randomNumeric(5);
+
+    OzoneManager om = cluster.getOzoneManager();
+    OzoneClient client = cluster.getClient();
+    ObjectStore objectStore = client.getObjectStore();
+
+    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
+    OzoneManagerProtocolClientSideTranslatorPB omClient =
+        new OzoneManagerProtocolClientSideTranslatorPB(
+            OmTransportFactory.create(conf, ugi, null),
+            RandomStringUtils.randomAscii(5));
+
+    objectStore.createVolume(volumeName);
+
+    // Verify that the last transactionIndex stored in DB after volume
+    // creation equals the transaction index corresponding to volume's
+    // objectID. Also, the volume transaction index should be 1 as this is
+    // the first transaction in this cluster.
+    OmVolumeArgs volumeInfo = omClient.getVolumeInfo(volumeName);
+    long volumeTrxnIndex = OmUtils.getTxIdFromObjectId(
+        volumeInfo.getObjectID());
+    Assert.assertEquals(1, volumeTrxnIndex);
+    Assert.assertEquals(volumeTrxnIndex, om.getLastTrxnIndexForNonRatis());
+
+    OzoneVolume ozoneVolume = objectStore.getVolume(volumeName);
+    ozoneVolume.createBucket(bucketName);
+
+    // Verify last transactionIndex is updated after bucket creation
+    OmBucketInfo bucketInfo = omClient.getBucketInfo(volumeName, bucketName);
+    long bucketTrxnIndex = OmUtils.getTxIdFromObjectId(
+        bucketInfo.getObjectID());
+    Assert.assertEquals(2, bucketTrxnIndex);
+    Assert.assertEquals(bucketTrxnIndex, om.getLastTrxnIndexForNonRatis());
+
+    // Restart the OM and create new object
+    cluster.restartOzoneManager();
+
+    String data = "random data";
+    OzoneOutputStream ozoneOutputStream = ozoneVolume.getBucket(bucketName)
+        .createKey(keyName, data.length(), ReplicationType.RATIS,
+            ReplicationFactor.ONE, new HashMap<>());
+    ozoneOutputStream.write(data.getBytes(), 0, data.length());
+    ozoneOutputStream.close();
+
+    // Verify last transactionIndex is updated after key creation and the
+    // transaction index after restart is incremented from the last
+    // transaction index before restart.
+    OmKeyInfo omKeyInfo = omClient.lookupKey(new OmKeyArgs.Builder()
+        .setVolumeName(volumeName)
+        .setBucketName(bucketName)
+        .setKeyName(keyName)
+        .setRefreshPipeline(true).build());
+    long keyTrxnIndex = OmUtils.getTxIdFromObjectId(
+        omKeyInfo.getObjectID());
+    Assert.assertEquals(3, keyTrxnIndex);
+    // Key commit is a separate transaction. Hence, the last trxn index in DB
+    // should be 1 more than KeyTrxnIndex
+    Assert.assertEquals(4, om.getLastTrxnIndexForNonRatis());
+  }
+
+  @Test
+  public void testEpochIntegrationInObjectID() throws Exception {
+    // Create a volume and check the objectID has the epoch as
+    // EPOCH_FOR_RATIS_NOT_ENABLED in the first 2 bits.
+
+    OzoneClient client = cluster.getClient();
+    ObjectStore objectStore = client.getObjectStore();
+
+    String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
+    objectStore.createVolume(volumeName);
+
+    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
+    OzoneManagerProtocolClientSideTranslatorPB omClient =
+        new OzoneManagerProtocolClientSideTranslatorPB(
+            OmTransportFactory.create(conf, ugi, null),
+            RandomStringUtils.randomAscii(5));
+
+    long volObjId = omClient.getVolumeInfo(volumeName).getObjectID();
+    long epochInVolObjId = volObjId >> EPOCH_ID_SHIFT;
+
+    Assert.assertEquals(EPOCH_WHEN_RATIS_NOT_ENABLED, epochInVolObjId);
+  }
+}
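The new test checks that, when Ratis is disabled, object IDs carry EPOCH_WHEN_RATIS_NOT_ENABLED in their high bits while the remaining bits hold the transaction index. A small sketch of decoding such an ID with the same helpers the test uses; the sample value is illustrative, not taken from a real cluster:

    import org.apache.hadoop.ozone.OmUtils;
    import static org.apache.hadoop.ozone.OmUtils.EPOCH_ID_SHIFT;

    public final class ObjectIdDecodeSketch {
      private ObjectIdDecodeSketch() { }

      public static void main(String[] args) {
        // In a real test the id would come from getObjectID(); 42 is a stand-in.
        long objectId = 42L;
        // The epoch is stored in the bits above EPOCH_ID_SHIFT ...
        long epoch = objectId >> EPOCH_ID_SHIFT;
        // ... and the transaction index is recovered by the same helper the
        // test uses to check monotonicity across restarts.
        long txId = OmUtils.getTxIdFromObjectId(objectId);
        System.out.println("epoch=" + epoch + ", txId=" + txId);
      }
    }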
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
index 4a2ccbb..effe32f 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
@@ -35,6 +35,7 @@
 import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
 import org.apache.hadoop.ozone.client.rpc.RpcClient;
 import org.apache.hadoop.ozone.om.ha.OMFailoverProxyProvider;
+import org.apache.hadoop.ozone.om.ratis.OzoneManagerRatisServerConfig;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.Before;
 import org.junit.After;
@@ -46,6 +47,7 @@
 
 import java.io.IOException;
 import java.net.ConnectException;
+import java.time.Duration;
 import java.util.UUID;
 import java.util.HashMap;
 
@@ -77,6 +79,7 @@
   private static final int OZONE_CLIENT_FAILOVER_MAX_ATTEMPTS = 5;
   private static final int IPC_CLIENT_CONNECT_MAX_RETRIES = 4;
   private static final long SNAPSHOT_THRESHOLD = 50;
+  private static final Duration RETRY_CACHE_DURATION = Duration.ofSeconds(30);
 
   @Rule
   public ExpectedException exception = ExpectedException.none();
@@ -116,6 +119,10 @@
     return OZONE_CLIENT_FAILOVER_MAX_ATTEMPTS;
   }
 
+  public static Duration getRetryCacheDuration() {
+    return RETRY_CACHE_DURATION;
+  }
+
   /**
    * Create a MiniDFSCluster for testing.
    * <p>
@@ -144,6 +151,13 @@
         OMConfigKeys.OZONE_OM_RATIS_SNAPSHOT_AUTO_TRIGGER_THRESHOLD_KEY,
         SNAPSHOT_THRESHOLD);
 
+    OzoneManagerRatisServerConfig omHAConfig =
+        conf.getObject(OzoneManagerRatisServerConfig.class);
+
+    omHAConfig.setRetryCacheTimeout(RETRY_CACHE_DURATION);
+
+    conf.setFromObject(omHAConfig);
+
     /**
      * config for key deleting service.
      */
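The HA test base now pins the OM Ratis retry-cache timeout through the typed OzoneManagerRatisServerConfig object. A minimal sketch of that read-modify-write pattern on an OzoneConfiguration; the 30-second value mirrors RETRY_CACHE_DURATION above and the class name is illustrative:

    import java.time.Duration;
    import org.apache.hadoop.hdds.conf.OzoneConfiguration;
    import org.apache.hadoop.ozone.om.ratis.OzoneManagerRatisServerConfig;

    public final class RetryCacheConfigSketch {
      private RetryCacheConfigSketch() { }

      public static void main(String[] args) {
        OzoneConfiguration conf = new OzoneConfiguration();
        // getObject materializes the typed config view; setFromObject writes
        // the modified values back into the underlying configuration.
        OzoneManagerRatisServerConfig ratisConf =
            conf.getObject(OzoneManagerRatisServerConfig.class);
        ratisConf.setRetryCacheTimeout(Duration.ofSeconds(30));
        conf.setFromObject(ratisConf);
      }
    }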
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHAMetadataOnly.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHAMetadataOnly.java
index fbe1762..84a1b17 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHAMetadataOnly.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHAMetadataOnly.java
@@ -65,7 +65,7 @@
 import static org.apache.hadoop.ozone.MiniOzoneHAClusterImpl.NODE_FAILURE_TIMEOUT;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_CLIENT_WAIT_BETWEEN_RETRIES_MILLIS_DEFAULT;
 
-import static org.apache.ratis.server.metrics.RaftLogMetrics.RATIS_APPLICATION_NAME_METRICS;
+import static org.apache.ratis.metrics.RatisMetrics.RATIS_APPLICATION_NAME_METRICS;
 import static org.junit.Assert.fail;
 
 /**
@@ -395,11 +395,16 @@
             .setCmdType(OzoneManagerProtocolProtos.Type.CreateVolume).build();
 
     RaftClientReply raftClientReply =
-        raftServer.submitClientRequest(new RaftClientRequest(clientId,
-         raftServer.getId(), ozoneManagerRatisServer.getRaftGroup()
-         .getGroupId(), callId,
-        Message.valueOf(OMRatisHelper.convertRequestToByteString(omRequest)),
-        RaftClientRequest.writeRequestType(), null));
+        raftServer.submitClientRequest(RaftClientRequest.newBuilder()
+            .setClientId(clientId)
+            .setServerId(raftServer.getId())
+            .setGroupId(ozoneManagerRatisServer.getRaftGroup().getGroupId())
+            .setCallId(callId)
+            .setMessage(
+                Message.valueOf(
+                    OMRatisHelper.convertRequestToByteString(omRequest)))
+            .setType(RaftClientRequest.writeRequestType())
+            .build());
 
     Assert.assertTrue(raftClientReply.isSuccess());
 
@@ -409,18 +414,48 @@
     logCapturer.clearOutput();
 
     raftClientReply =
-        raftServer.submitClientRequest(new RaftClientRequest(clientId,
-            raftServer.getId(), ozoneManagerRatisServer.getRaftGroup()
-            .getGroupId(), callId, Message.valueOf(
-                OMRatisHelper.convertRequestToByteString(omRequest)),
-            RaftClientRequest.writeRequestType(), null));
+        raftServer.submitClientRequest(RaftClientRequest.newBuilder()
+            .setClientId(clientId)
+            .setServerId(raftServer.getId())
+            .setGroupId(ozoneManagerRatisServer.getRaftGroup().getGroupId())
+            .setCallId(callId)
+            .setMessage(
+                Message.valueOf(
+                    OMRatisHelper.convertRequestToByteString(omRequest)))
+            .setType(RaftClientRequest.writeRequestType())
+            .build());
 
     Assert.assertTrue(raftClientReply.isSuccess());
 
     // As second time with same client id and call id, this request should
     // not be executed ratis server should return from cache.
-    Assert.assertFalse(logCapturer.getOutput().contains("created volume:"
-        + volumeName));
+    // If it had been executed a second time, it would have failed with
+    // "Volume creation failed"; check that this did not happen.
+    Assert.assertFalse(logCapturer.getOutput().contains(
+        "Volume creation failed"));
+
+    // Sleep slightly longer than the retry cache duration so that the
+    // cache entry expires.
+    Thread.sleep(getRetryCacheDuration().toMillis() + 5000);
+
+    raftClientReply =
+        raftServer.submitClientRequest(RaftClientRequest.newBuilder()
+            .setClientId(clientId)
+            .setServerId(raftServer.getId())
+            .setGroupId(ozoneManagerRatisServer.getRaftGroup().getGroupId())
+            .setCallId(callId)
+            .setMessage(
+                Message.valueOf(
+                    OMRatisHelper.convertRequestToByteString(omRequest)))
+            .setType(RaftClientRequest.writeRequestType())
+            .build());
+
+    Assert.assertTrue(raftClientReply.isSuccess());
+
+    // Even with the same client id and call id, this request should be
+    // executed by the Ratis server because it is sent after the retry
+    // cache expiry duration.
+    Assert.assertTrue(logCapturer.getOutput().contains(
+        "Volume creation failed"));
 
   }
 
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRestart.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRestart.java
index 70eb8d4..bdaca53 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRestart.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerRestart.java
@@ -26,27 +26,17 @@
 import org.apache.hadoop.hdds.client.ReplicationType;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
-import org.apache.hadoop.ozone.OmUtils;
 import org.apache.hadoop.ozone.client.ObjectStore;
 import org.apache.hadoop.ozone.client.OzoneBucket;
 import org.apache.hadoop.ozone.client.OzoneClient;
 import org.apache.hadoop.ozone.client.OzoneKey;
 import org.apache.hadoop.ozone.client.OzoneVolume;
 import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
-import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
-import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
-import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
-import org.apache.hadoop.ozone.om.protocolPB.OmTransportFactory;
-import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB;
-import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.test.GenericTestUtils;
 
 import org.apache.commons.lang3.RandomStringUtils;
 
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_RATIS_PIPELINE_LIMIT;
-import static org.apache.hadoop.ozone.OmUtils.EPOCH_ID_SHIFT;
-import static org.apache.hadoop.ozone.OmUtils.EPOCH_WHEN_RATIS_NOT_ENABLED;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS_WILDCARD;
@@ -217,95 +207,4 @@
     Assert.assertTrue(ozoneKey.getReplicationType().equals(
         ReplicationType.RATIS));
   }
-
-  @Test
-  public void testUniqueTrxnIndexOnOMRestart() throws Exception {
-    // When OM is restarted, the transaction index for requests should not
-    // start from 0. It should incrementally increase from the last
-    // transaction index which was stored in DB before restart.
-
-    String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
-    String bucketName = "bucket" + RandomStringUtils.randomNumeric(5);
-    String keyName = "key" + RandomStringUtils.randomNumeric(5);
-
-    OzoneManager om = cluster.getOzoneManager();
-    OzoneClient client = cluster.getClient();
-    ObjectStore objectStore = client.getObjectStore();
-
-    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
-    OzoneManagerProtocolClientSideTranslatorPB omClient =
-        new OzoneManagerProtocolClientSideTranslatorPB(
-            OmTransportFactory.create(conf, ugi, null),
-            RandomStringUtils.randomAscii(5));
-
-    objectStore.createVolume(volumeName);
-
-    // Verify that the last transactionIndex stored in DB after volume
-    // creation equals the transaction index corresponding to volume's
-    // objectID. Also, the volume transaction index should be 1 as this is
-    // the first transaction in this cluster.
-    OmVolumeArgs volumeInfo = omClient.getVolumeInfo(volumeName);
-    long volumeTrxnIndex = OmUtils.getTxIdFromObjectId(
-        volumeInfo.getObjectID());
-    Assert.assertEquals(1, volumeTrxnIndex);
-    Assert.assertEquals(volumeTrxnIndex, om.getLastTrxnIndexForNonRatis());
-
-    OzoneVolume ozoneVolume = objectStore.getVolume(volumeName);
-    ozoneVolume.createBucket(bucketName);
-
-    // Verify last transactionIndex is updated after bucket creation
-    OmBucketInfo bucketInfo = omClient.getBucketInfo(volumeName, bucketName);
-    long bucketTrxnIndex = OmUtils.getTxIdFromObjectId(
-        bucketInfo.getObjectID());
-    Assert.assertEquals(2, bucketTrxnIndex);
-    Assert.assertEquals(bucketTrxnIndex, om.getLastTrxnIndexForNonRatis());
-
-    // Restart the OM and create new object
-    cluster.restartOzoneManager();
-
-    String data = "random data";
-    OzoneOutputStream ozoneOutputStream = ozoneVolume.getBucket(bucketName)
-        .createKey(keyName, data.length(), ReplicationType.RATIS,
-            ReplicationFactor.ONE, new HashMap<>());
-    ozoneOutputStream.write(data.getBytes(), 0, data.length());
-    ozoneOutputStream.close();
-
-    // Verify last transactionIndex is updated after key creation and the
-    // transaction index after restart is incremented from the last
-    // transaction index before restart.
-    OmKeyInfo omKeyInfo = omClient.lookupKey(new OmKeyArgs.Builder()
-        .setVolumeName(volumeName)
-        .setBucketName(bucketName)
-        .setKeyName(keyName)
-        .setRefreshPipeline(true).build());
-    long keyTrxnIndex = OmUtils.getTxIdFromObjectId(
-        omKeyInfo.getObjectID());
-    Assert.assertEquals(3, keyTrxnIndex);
-    // Key commit is a separate transaction. Hence, the last trxn index in DB
-    // should be 1 more than KeyTrxnIndex
-    Assert.assertEquals(4, om.getLastTrxnIndexForNonRatis());
-  }
-
-  @Test
-  public void testEpochIntegrationInObjectID() throws Exception {
-    // Create a volume and check the objectID has the epoch as
-    // EPOCH_FOR_RATIS_NOT_ENABLED in the first 2 bits.
-
-    OzoneClient client = cluster.getClient();
-    ObjectStore objectStore = client.getObjectStore();
-
-    String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
-    objectStore.createVolume(volumeName);
-
-    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
-    OzoneManagerProtocolClientSideTranslatorPB omClient =
-        new OzoneManagerProtocolClientSideTranslatorPB(
-        OmTransportFactory.create(conf, ugi, null),
-        RandomStringUtils.randomAscii(5));
-
-    long volObjId = omClient.getVolumeInfo(volumeName).getObjectID();
-    long epochInVolObjId = volObjId >> EPOCH_ID_SHIFT;
-
-    Assert.assertEquals(EPOCH_WHEN_RATIS_NOT_ENABLED, epochInVolObjId);
-  }
 }
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/parser/TestOMRatisLogParser.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/parser/TestOMRatisLogParser.java
index d015000..71e438f 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/parser/TestOMRatisLogParser.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/parser/TestOMRatisLogParser.java
@@ -104,19 +104,11 @@
 
     String[] ratisDirs = omMetaDir.list();
     Assert.assertNotNull(ratisDirs);
-    Assert.assertEquals(2, ratisDirs.length);
+    Assert.assertEquals(1, ratisDirs.length);
 
-    File groupDir = null;
-    for (int i=0; i< ratisDirs.length; i++) {
-      if (ratisDirs[i].equals("snapshot")) {
-        continue;
-      }
-      groupDir = new File(omMetaDir, ratisDirs[i]);
-    }
+    File groupDir = new File(omMetaDir, ratisDirs[0]);
 
     Assert.assertNotNull(groupDir);
-    Assert.assertFalse(groupDir.toString(),
-        groupDir.getName().contains("snapshot"));
     Assert.assertTrue(groupDir.isDirectory());
     File currentDir = new File(groupDir, "current");
     File logFile = new File(currentDir, "log_inprogress_0");
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/recon/TestReconWithOzoneManagerHA.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/recon/TestReconWithOzoneManagerHA.java
index be146fc..2ff79de 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/recon/TestReconWithOzoneManagerHA.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/recon/TestReconWithOzoneManagerHA.java
@@ -103,7 +103,7 @@
     Assert.assertNotNull("Timed out waiting OM leader election to finish: "
         + "no leader or more than one leader.", ozoneManager);
     Assert.assertTrue("Should have gotten the leader!",
-        ozoneManager.get().isLeader());
+        ozoneManager.get().isLeaderReady());
 
     OzoneManagerServiceProviderImpl impl = (OzoneManagerServiceProviderImpl)
         cluster.getReconServer().getOzoneManagerServiceProvider();
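
The assertion above now uses isLeaderReady() instead of isLeader(): Recon should only talk to the OM leader once it has applied its pending Ratis transactions, not merely won the election. A minimal sketch of the same wait pattern, assuming an OzoneManager reference named om obtained from the mini cluster (names are illustrative, not part of this patch):

  // Poll until the elected OM is also ready to serve, i.e. it has caught up
  // on its Ratis log. isLeaderReady() is stricter than the old isLeader().
  private static void waitForLeaderReady(OzoneManager om) throws Exception {
    GenericTestUtils.waitFor(om::isLeaderReady, 100, 30000);
  }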
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMNodeManagerMXBean.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMNodeManagerMXBean.java
index 5eba2f5..fd4d3db 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMNodeManagerMXBean.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMNodeManagerMXBean.java
@@ -100,10 +100,32 @@
             + "name=SCMNodeManagerInfo");
 
     TabularData data = (TabularData) mbs.getAttribute(bean, "NodeCount");
-    Map<String, Integer> nodeCount = scm.getScmNodeManager().getNodeCount();
-    Map<String, Long> nodeCountLong = new HashMap<>();
-    nodeCount.forEach((k, v) -> nodeCountLong.put(k, new Long(v)));
-    verifyEquals(data, nodeCountLong);
+    Map<String, Map<String, Integer>> mbeanMap = convertNodeCountToMap(data);
+    Map<String, Map<String, Integer>> nodeMap =
+        scm.getScmNodeManager().getNodeCount();
+    assertTrue(nodeMap.equals(mbeanMap));
+  }
+
+  private Map<String, Map<String, Integer>> convertNodeCountToMap(
+      TabularData data) {
+    Map<String, Map<String, Integer>> map = new HashMap<>();
+    for (Object o : data.values()) {
+      CompositeData cds = (CompositeData) o;
+      Iterator<?> it = cds.values().iterator();
+      String opState = it.next().toString();
+      TabularData states = (TabularData) it.next();
+
+      Map<String, Integer> healthStates = new HashMap<>();
+      for (Object obj : states.values()) {
+        CompositeData stateData = (CompositeData) obj;
+        Iterator<?> stateIt = stateData.values().iterator();
+        String health = stateIt.next().toString();
+        Integer value = Integer.parseInt(stateIt.next().toString());
+        healthStates.put(health, value);
+      }
+      map.put(opState, healthStates);
+    }
+    return map;
   }
 
   private void verifyEquals(TabularData actualData, Map<String, Long>
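
NodeManager#getNodeCount() now returns a two-level map, operational state first and health state second, and the SCMNodeManagerInfo MXBean exposes the same shape as nested TabularData, hence the conversion helper above. A hedged sketch of reading the new structure; the string keys are assumed to be the NodeOperationalState/NodeState enum names:

  // Count IN_SERVICE + HEALTHY nodes from the nested node-count map.
  Map<String, Map<String, Integer>> counts =
      scm.getScmNodeManager().getNodeCount();
  int inServiceHealthy = counts
      .getOrDefault("IN_SERVICE", Collections.emptyMap())
      .getOrDefault("HEALTHY", 0);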
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/node/TestDecommissionAndMaintenance.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/node/TestDecommissionAndMaintenance.java
new file mode 100644
index 0000000..c42f5a8
--- /dev/null
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/node/TestDecommissionAndMaintenance.java
@@ -0,0 +1,709 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.scm.node;
+
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.cli.ContainerOperationClient;
+import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
+import org.apache.hadoop.hdds.scm.container.ContainerNotFoundException;
+import org.apache.hadoop.hdds.scm.container.ContainerReplica;
+import org.apache.hadoop.hdds.scm.container.ContainerReplicaCount;
+import org.apache.hadoop.hdds.scm.container.ReplicationManager.ReplicationManagerConfiguration;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
+import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.TestDataUtil;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import java.util.stream.Collectors;
+
+import static java.util.concurrent.TimeUnit.SECONDS;
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_COMMAND_STATUS_REPORT_INTERVAL;
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_CONTAINER_REPORT_INTERVAL;
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_HEARTBEAT_INTERVAL;
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_NODE_REPORT_INTERVAL;
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_PIPELINE_REPORT_INTERVAL;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.DECOMMISSIONED;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.DECOMMISSIONING;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.ENTERING_MAINTENANCE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.IN_MAINTENANCE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeOperationalState.IN_SERVICE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DEAD;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DEADNODE_INTERVAL;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_HEARTBEAT_PROCESS_INTERVAL;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_STALENODE_INTERVAL;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests decommission and maintenance through the SCM client.
+ */
+
+public class TestDecommissionAndMaintenance {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TestDecommissionAndMaintenance.class);
+
+  private static int numOfDatanodes = 6;
+  private static String bucketName = "bucket1";
+  private static String volName = "vol1";
+  private OzoneBucket bucket;
+  private MiniOzoneCluster cluster;
+  private NodeManager nm;
+  private ContainerManager cm;
+  private PipelineManager pm;
+  private StorageContainerManager scm;
+
+  private ContainerOperationClient scmClient;
+
+  @Before
+  public void setUp() throws Exception {
+    OzoneConfiguration conf = new OzoneConfiguration();
+    final int interval = 100;
+
+    conf.setTimeDuration(OZONE_SCM_HEARTBEAT_PROCESS_INTERVAL,
+        interval, TimeUnit.MILLISECONDS);
+    conf.setTimeDuration(HDDS_HEARTBEAT_INTERVAL, 1, SECONDS);
+    conf.setInt(ScmConfigKeys.OZONE_DATANODE_PIPELINE_LIMIT, 1);
+    conf.setTimeDuration(HDDS_PIPELINE_REPORT_INTERVAL, 1, SECONDS);
+    conf.setTimeDuration(HDDS_COMMAND_STATUS_REPORT_INTERVAL, 1, SECONDS);
+    conf.setTimeDuration(HDDS_CONTAINER_REPORT_INTERVAL, 1, SECONDS);
+    conf.setTimeDuration(HDDS_NODE_REPORT_INTERVAL, 1, SECONDS);
+    conf.setTimeDuration(OZONE_SCM_STALENODE_INTERVAL, 3, SECONDS);
+    conf.setTimeDuration(OZONE_SCM_DEADNODE_INTERVAL, 6, SECONDS);
+    conf.setTimeDuration(OZONE_SCM_DATANODE_ADMIN_MONITOR_INTERVAL,
+        1, SECONDS);
+
+    ReplicationManagerConfiguration replicationConf =
+        conf.getObject(ReplicationManagerConfiguration.class);
+    replicationConf.setInterval(Duration.ofSeconds(1));
+    conf.setFromObject(replicationConf);
+
+    cluster = MiniOzoneCluster.newBuilder(conf)
+        .setNumDatanodes(numOfDatanodes)
+        .build();
+    cluster.waitForClusterToBeReady();
+    setManagers();
+
+    bucket = TestDataUtil.createVolumeAndBucket(cluster, volName, bucketName);
+    scmClient = new ContainerOperationClient(conf);
+  }
+
+  @After
+  public void tearDown() {
+    if (cluster != null) {
+      cluster.shutdown();
+    }
+  }
+
+  @Test
+  // Decommissioning a node with open pipelines should close the pipelines
+  // and hence the open containers, after which the containers should be
+  // replicated by the replication manager.
+  public void testNodeWithOpenPipelineCanBeDecommissioned()
+      throws Exception {
+    // Generate some data on the empty cluster to create some containers
+    generateData(20, "key", ReplicationFactor.THREE, ReplicationType.RATIS);
+
+    // Locate any container and find its open pipeline
+    final ContainerInfo container = waitForAndReturnContainer();
+    Pipeline pipeline = pm.getPipeline(container.getPipelineID());
+    assertEquals(Pipeline.PipelineState.OPEN, pipeline.getPipelineState());
+    Set<ContainerReplica> replicas = getContainerReplicas(container);
+
+    final DatanodeDetails toDecommission = getOneDNHostingReplica(replicas);
+    scmClient.decommissionNodes(Arrays.asList(
+        getDNHostAndPort(toDecommission)));
+
+    waitForDnToReachOpState(toDecommission, DECOMMISSIONED);
+    // Ensure one node transitioned to DECOMMISSIONED
+    List<DatanodeDetails> decomNodes = nm.getNodes(
+        DECOMMISSIONED,
+        HEALTHY);
+    assertEquals(1, decomNodes.size());
+
+    // Should now be 4 replicas online as the DN is still alive but
+    // in the DECOMMISSIONED state.
+    waitForContainerReplicas(container, 4);
+
+    // Stop the decommissioned DN
+    cluster.shutdownHddsDatanode(toDecommission);
+    waitForDnToReachHealthState(toDecommission, DEAD);
+
+    // Now the decommissioned node is dead, we should have
+    // 3 replicas for the tracked container.
+    waitForContainerReplicas(container, 3);
+  }
+
+  @Test
+  // After an SCM restart, it will have forgotten all the operational states.
+  // However, the state will have been persisted on the DNs. Therefore, on
+  // initial registration, the DN operationalState is the source of truth and
+  // SCM should be updated to reflect it.
+  public void testDecommissionedStateReinstatedAfterSCMRestart()
+      throws Exception {
+    // Decommission any node and wait for it to be DECOMMISSIONED
+    generateData(20, "key", ReplicationFactor.THREE, ReplicationType.RATIS);
+    DatanodeDetails dn = nm.getAllNodes().get(0);
+    scmClient.decommissionNodes(Arrays.asList(getDNHostAndPort(dn)));
+    waitForDnToReachOpState(dn, DECOMMISSIONED);
+
+    cluster.restartStorageContainerManager(true);
+    setManagers();
+    DatanodeDetails newDn = nm.getNodeByUuid(dn.getUuid().toString());
+
+    // On initial registration, the DN should report its operational state
+    // and if it is decommissioned, that should be updated in the NodeStatus
+    waitForDnToReachOpState(newDn, DECOMMISSIONED);
+    // Also confirm the datanodeDetails correctly reflect the operational
+    // state.
+    waitForDnToReachPersistedOpState(newDn, DECOMMISSIONED);
+  }
+
+  @Test
+  // If a node has not yet completed decommission and SCM is restarted, then
+  // when it re-registers it should re-enter the decommission workflow and
+  // complete decommissioning.
+  public void testDecommissioningNodesCompleteDecommissionOnSCMRestart()
+      throws Exception {
+    // First stop the ReplicationManager so nodes marked for decommission
+    // cannot make any progress. The node will be stuck in DECOMMISSIONING.
+    stopReplicationManager();
+    // Generate some data and then pick a DN to decommission which is hosting a
+    // container. This ensures it will not decommission immediately due to
+    // having no containers.
+    generateData(20, "key", ReplicationFactor.THREE, ReplicationType.RATIS);
+    final ContainerInfo container = waitForAndReturnContainer();
+    final DatanodeDetails dn
+        = getOneDNHostingReplica(getContainerReplicas(container));
+    scmClient.decommissionNodes(Arrays.asList(getDNHostAndPort(dn)));
+
+    // Wait for the state to be persisted on the DN so it can report it on
+    // restart of SCM.
+    waitForDnToReachPersistedOpState(dn, DECOMMISSIONING);
+    cluster.restartStorageContainerManager(true);
+    setManagers();
+
+    // After the SCM restart, the DN should report as DECOMMISSIONING, then
+    // it should re-enter the decommission workflow and move to DECOMMISSIONED
+    DatanodeDetails newDn = nm.getNodeByUuid(dn.getUuid().toString());
+    waitForDnToReachOpState(newDn, DECOMMISSIONED);
+    waitForDnToReachPersistedOpState(newDn, DECOMMISSIONED);
+  }
+
+  @Test
+  // If a node was decommissioned and then stopped so it is dead, and it is
+  // then recommissioned in SCM and restarted, the SCM state should be taken as
+  // the source of truth: the node should go to the IN_SERVICE state and that
+  // state should be persisted on the DN.
+  public void testStoppedDecommissionedNodeTakesSCMStateOnRestart()
+      throws Exception {
+    // Decommission node and wait for it to be DECOMMISSIONED
+    generateData(20, "key", ReplicationFactor.THREE, ReplicationType.RATIS);
+
+    DatanodeDetails dn = nm.getAllNodes().get(0);
+    scmClient.decommissionNodes(Arrays.asList(getDNHostAndPort(dn)));
+    waitForDnToReachOpState(dn, DECOMMISSIONED);
+    waitForDnToReachPersistedOpState(dn, DECOMMISSIONED);
+
+    int dnIndex = cluster.getHddsDatanodeIndex(dn);
+    cluster.shutdownHddsDatanode(dnIndex);
+    waitForDnToReachHealthState(dn, DEAD);
+
+    // Datanode is shutdown and dead. Now recommission it in SCM
+    scmClient.recommissionNodes(Arrays.asList(getDNHostAndPort(dn)));
+
+    // Now restart it and ensure it remains IN_SERVICE
+    cluster.restartHddsDatanode(dnIndex, true);
+    DatanodeDetails newDn = nm.getNodeByUuid(dn.getUuid().toString());
+
+    // As SCM was not restarted, this is not an initial registration. The DN
+    // reports its operational state and, if it differs from what SCM has, the
+    // SCM state is used and the DN state updated.
+    waitForDnToReachHealthState(newDn, HEALTHY);
+    waitForDnToReachOpState(newDn, IN_SERVICE);
+    waitForDnToReachPersistedOpState(dn, IN_SERVICE);
+  }
+
+  @Test
+  // A node which is decommissioning or decommissioned can be moved back to
+  // IN_SERVICE.
+  public void testDecommissionedNodeCanBeRecommissioned() throws Exception {
+    generateData(20, "key", ReplicationFactor.THREE, ReplicationType.RATIS);
+    DatanodeDetails dn = nm.getAllNodes().get(0);
+    scmClient.decommissionNodes(Arrays.asList(getDNHostAndPort(dn)));
+
+    GenericTestUtils.waitFor(
+        () -> !dn.getPersistedOpState()
+            .equals(IN_SERVICE),
+        200, 30000);
+
+    scmClient.recommissionNodes(Arrays.asList(getDNHostAndPort(dn)));
+    waitForDnToReachOpState(dn, IN_SERVICE);
+    waitForDnToReachPersistedOpState(dn, IN_SERVICE);
+  }
+
+  @Test
+  // When putting a single node into maintenance, its pipelines should be
+  // closed, but no new replicas should be created and the node should
+  // transition into maintenance.
+  public void testSingleNodeWithOpenPipelineCanGotoMaintenance()
+      throws Exception {
+    // Generate some data on the empty cluster to create some containers
+    generateData(20, "key", ReplicationFactor.THREE, ReplicationType.RATIS);
+
+    // Locate any container and find its open pipeline
+    final ContainerInfo container = waitForAndReturnContainer();
+    Pipeline pipeline = pm.getPipeline(container.getPipelineID());
+    assertEquals(Pipeline.PipelineState.OPEN, pipeline.getPipelineState());
+    Set<ContainerReplica> replicas = getContainerReplicas(container);
+
+    final DatanodeDetails dn = getOneDNHostingReplica(replicas);
+    scmClient.startMaintenanceNodes(Arrays.asList(
+        getDNHostAndPort(dn)), 0);
+
+    waitForDnToReachOpState(dn, IN_MAINTENANCE);
+    waitForDnToReachPersistedOpState(dn, IN_MAINTENANCE);
+
+    // Should still be 3 replicas online as no replication should happen for
+    // maintenance
+    Set<ContainerReplica> newReplicas =
+        cm.getContainerReplicas(container.containerID());
+    assertEquals(3, newReplicas.size());
+
+    // Stop the maintenance DN
+    cluster.shutdownHddsDatanode(dn);
+    waitForDnToReachHealthState(dn, DEAD);
+
+    // Now the maintenance node is dead, we should still have
+    // 3 replicas as we don't purge the replicas for a dead maintenance node
+    newReplicas = cm.getContainerReplicas(container.containerID());
+    assertEquals(3, newReplicas.size());
+
+    // Restart the DN and it should keep the IN_MAINTENANCE state
+    cluster.restartHddsDatanode(dn, true);
+    DatanodeDetails newDN = nm.getNodeByUuid(dn.getUuid().toString());
+    waitForDnToReachHealthState(newDN, HEALTHY);
+    waitForDnToReachPersistedOpState(newDN, IN_MAINTENANCE);
+  }
+
+  @Test
+  // After a node enters maintenance and is stopped, it can be recommissioned in
+  // SCM. Then when it is restarted, it should go back to IN_SERVICE and have
+  // that persisted on the DN.
+  public void testStoppedMaintenanceNodeTakesScmStateOnRestart()
+      throws Exception {
+    // Put a node into maintenance and wait for it to complete
+    generateData(20, "key", ReplicationFactor.THREE, ReplicationType.RATIS);
+    DatanodeDetails dn = nm.getAllNodes().get(0);
+    scmClient.startMaintenanceNodes(Arrays.asList(getDNHostAndPort(dn)), 0);
+    waitForDnToReachOpState(dn, IN_MAINTENANCE);
+    waitForDnToReachPersistedOpState(dn, IN_MAINTENANCE);
+
+    int dnIndex = cluster.getHddsDatanodeIndex(dn);
+    cluster.shutdownHddsDatanode(dnIndex);
+    waitForDnToReachHealthState(dn, DEAD);
+
+    // Datanode is shutdown and dead. Now recommission it in SCM
+    scmClient.recommissionNodes(Arrays.asList(getDNHostAndPort(dn)));
+
+    // Now restart it and ensure it remains IN_SERVICE
+    cluster.restartHddsDatanode(dnIndex, true);
+    DatanodeDetails newDn = nm.getNodeByUuid(dn.getUuid().toString());
+
+    // As SCM was not restarted, this is not an initial registration. The DN
+    // reports its operational state and, if it differs from what SCM has, the
+    // SCM state is used and the DN state updated.
+    waitForDnToReachHealthState(newDn, HEALTHY);
+    waitForDnToReachOpState(newDn, IN_SERVICE);
+    waitForDnToReachPersistedOpState(dn, IN_SERVICE);
+  }
+
+  @Test
+  // By default a node can enter maintenance if there are two replicas left
+  // available when the maintenance nodes are stopped. Therefore putting all
+  // nodes hosting a replica into maintenance should cause new replicas to get
+  // created before the nodes can enter maintenance. When the maintenance nodes
+  // return, the excess replicas should be removed.
+  public void testContainerIsReplicatedWhenAllNodesGotoMaintenance()
+      throws Exception {
+    // Generate some data on the empty cluster to create some containers
+    generateData(20, "key", ReplicationFactor.THREE, ReplicationType.RATIS);
+    // Locate any container and find its open pipeline
+    final ContainerInfo container = waitForAndReturnContainer();
+    Set<ContainerReplica> replicas = getContainerReplicas(container);
+
+    List<DatanodeDetails> forMaintenance = new ArrayList<>();
+    replicas.forEach(r -> forMaintenance.add(r.getDatanodeDetails()));
+
+    scmClient.startMaintenanceNodes(forMaintenance.stream()
+        .map(d -> getDNHostAndPort(d))
+        .collect(Collectors.toList()), 0);
+
+    // Ensure all 3 DNs go to maintenance
+    for(DatanodeDetails dn : forMaintenance) {
+      waitForDnToReachPersistedOpState(dn, IN_MAINTENANCE);
+    }
+
+    // There should now be 5-6 replicas of the container we are tracking
+    Set<ContainerReplica> newReplicas =
+        cm.getContainerReplicas(container.containerID());
+    assertTrue(newReplicas.size() >= 5);
+
+    scmClient.recommissionNodes(forMaintenance.stream()
+        .map(d -> getDNHostAndPort(d))
+        .collect(Collectors.toList()));
+
+    // Ensure all 3 DNs return to IN_SERVICE
+    for(DatanodeDetails dn : forMaintenance) {
+      waitForDnToReachOpState(dn, IN_SERVICE);
+    }
+
+    waitForContainerReplicas(container, 3);
+  }
+
+  @Test
+  // If SCM is restarted when a node is ENTERING_MAINTENANCE, then when the node
+  // re-registers, it should continue to enter maintenance.
+  public void testEnteringMaintenanceNodeCompletesAfterSCMRestart()
+      throws Exception {
+    // Stop the Replication Manager to ensure no containers are replicated
+    stopReplicationManager();
+    // Generate some data on the empty cluster to create some containers
+    generateData(20, "key", ReplicationFactor.THREE, ReplicationType.RATIS);
+    // Locate any container and find its open pipeline
+    final ContainerInfo container = waitForAndReturnContainer();
+    Set<ContainerReplica> replicas = getContainerReplicas(container);
+
+    List<DatanodeDetails> forMaintenance = new ArrayList<>();
+    replicas.forEach(r -> forMaintenance.add(r.getDatanodeDetails()));
+
+    scmClient.startMaintenanceNodes(forMaintenance.stream()
+        .map(d -> getDNHostAndPort(d))
+        .collect(Collectors.toList()), 0);
+
+    // Ensure all 3 DNs go to ENTERING_MAINTENANCE
+    for(DatanodeDetails dn : forMaintenance) {
+      waitForDnToReachPersistedOpState(dn, ENTERING_MAINTENANCE);
+    }
+    cluster.restartStorageContainerManager(true);
+    setManagers();
+
+    List<DatanodeDetails> newDns = new ArrayList<>();
+    for(DatanodeDetails dn : forMaintenance) {
+      newDns.add(nm.getNodeByUuid(dn.getUuid().toString()));
+    }
+
+    // Ensure all 3 DNs go to maintenance
+    for(DatanodeDetails dn : newDns) {
+      waitForDnToReachOpState(dn, IN_MAINTENANCE);
+    }
+
+    // There should now be 5-6 replicas of the container we are tracking
+    Set<ContainerReplica> newReplicas =
+        cm.getContainerReplicas(container.containerID());
+    assertTrue(newReplicas.size() >= 5);
+  }
+
+  @Test
+  // For a node which is online, maintenance should end automatically when it
+  // expires and the node should go back into service.
+  // If the node is dead when maintenance expires, its replicas will be purged
+  // and new replicas created.
+  public void testMaintenanceEndsAutomaticallyAtTimeout()
+      throws Exception {
+    // Generate some data on the empty cluster to create some containers
+    generateData(20, "key", ReplicationFactor.THREE, ReplicationType.RATIS);
+    ContainerInfo container = waitForAndReturnContainer();
+    DatanodeDetails dn =
+        getOneDNHostingReplica(getContainerReplicas(container));
+
+    scmClient.startMaintenanceNodes(Arrays.asList(getDNHostAndPort(dn)), 0);
+    waitForDnToReachPersistedOpState(dn, IN_MAINTENANCE);
+
+    long newEndTime = System.currentTimeMillis() / 1000 + 5;
+    // Update the maintenance end time via NM manually. As the current
+    // decommission interface only allows us to specify hours from now as the
+    // end time, that is not really suitable for a test like this.
+    nm.setNodeOperationalState(dn, IN_MAINTENANCE, newEndTime);
+    waitForDnToReachOpState(dn, IN_SERVICE);
+    waitForDnToReachPersistedOpState(dn, IN_SERVICE);
+
+    // Put the node back into maintenance and then stop it and wait for it to
+    // go dead
+    scmClient.startMaintenanceNodes(Arrays.asList(getDNHostAndPort(dn)), 0);
+    waitForDnToReachPersistedOpState(dn, IN_MAINTENANCE);
+    cluster.shutdownHddsDatanode(dn);
+    waitForDnToReachHealthState(dn, DEAD);
+
+    newEndTime = System.currentTimeMillis() / 1000 + 5;
+    nm.setNodeOperationalState(dn, IN_MAINTENANCE, newEndTime);
+    waitForDnToReachOpState(dn, IN_SERVICE);
+    // Ensure there are 3 replicas not including the dead node, indicating a new
+    // replica was created
+    GenericTestUtils.waitFor(() -> getContainerReplicas(container)
+            .stream()
+            .filter(r -> !r.getDatanodeDetails().equals(dn))
+            .count() == 3,
+        200, 30000);
+  }
+
+  @Test
+  // If is SCM is Restarted when a maintenance node is dead, then we lose all
+  // the replicas associated with it, as the dead node cannot report them back
+  // in. If that happens, SCM has no choice except to replicate the containers.
+  public void testSCMHandlesRestartForMaintenanceNode()
+      throws Exception {
+    // Generate some data on the empty cluster to create some containers
+    generateData(20, "key", ReplicationFactor.THREE, ReplicationType.RATIS);
+    ContainerInfo container = waitForAndReturnContainer();
+    DatanodeDetails dn =
+        getOneDNHostingReplica(getContainerReplicas(container));
+
+    scmClient.startMaintenanceNodes(Arrays.asList(getDNHostAndPort(dn)), 0);
+    waitForDnToReachPersistedOpState(dn, IN_MAINTENANCE);
+
+    cluster.restartStorageContainerManager(true);
+    setManagers();
+
+    // Ensure there are 3 replicas with one in maintenance indicating no new
+    // replicas were created
+    final ContainerInfo newContainer = cm.getContainer(container.containerID());
+    waitForContainerReplicas(newContainer, 3);
+
+    ContainerReplicaCount counts =
+        scm.getReplicationManager().getContainerReplicaCount(newContainer);
+    assertEquals(1, counts.getMaintenanceCount());
+    assertTrue(counts.isSufficientlyReplicated());
+
+    // The node should be added back to the decommission monitor to ensure
+    // maintenance end time is correctly tracked.
+    GenericTestUtils.waitFor(() -> scm.getScmDecommissionManager().getMonitor()
+        .getTrackedNodes().size() == 1, 200, 30000);
+
+    // Now let the node go dead and repeat the test. This time ensure a new
+    // replica is created.
+    cluster.shutdownHddsDatanode(dn);
+    waitForDnToReachHealthState(dn, DEAD);
+
+    cluster.restartStorageContainerManager(false);
+    setManagers();
+
+    GenericTestUtils.waitFor(()
+        -> nm.getNodeCount(IN_SERVICE, null) == 5, 200, 30000);
+
+    // Ensure there are 3 replicas not including the dead node, indicating a new
+    // replica was created
+    final ContainerInfo nextContainer
+        = cm.getContainer(container.containerID());
+    waitForContainerReplicas(nextContainer, 3);
+    // There should be no IN_MAINTENANCE node:
+    assertEquals(0, nm.getNodeCount(IN_MAINTENANCE, null));
+    counts = scm.getReplicationManager().getContainerReplicaCount(newContainer);
+    assertEquals(0, counts.getMaintenanceCount());
+    assertTrue(counts.isSufficientlyReplicated());
+  }
+
+  /**
+   * Sets the instance variables to the values for the current MiniCluster.
+   */
+  private void setManagers() {
+    scm = cluster.getStorageContainerManager();
+    nm = scm.getScmNodeManager();
+    cm = scm.getContainerManager();
+    pm = scm.getPipelineManager();
+  }
+
+  /**
+   * Generates some data on the cluster so the cluster has some containers.
+   * @param keyCount The number of keys to create
+   * @param keyPrefix The prefix to use for the key name.
+   * @param repFactor The replication Factor for the keys
+   * @param repType The replication Type for the keys
+   * @throws IOException
+   */
+  private void generateData(int keyCount, String keyPrefix,
+      ReplicationFactor repFactor, ReplicationType repType) throws IOException {
+    for (int i=0; i<keyCount; i++) {
+      TestDataUtil.createKey(bucket, keyPrefix + i, repFactor, repType,
+          "this is the content");
+    }
+  }
+
+  /**
+   * Retrieves the NodeStatus for the given DN or fails the test if the
+   * Node cannot be found. This is a helper method to allow the nodeStatus to be
+   * checked in lambda expressions.
+   * @param dn Datanode for which to retrieve the NodeStatus.
+   * @return The NodeStatus of the given datanode.
+   */
+  private NodeStatus getNodeStatus(DatanodeDetails dn) {
+    NodeStatus status = null;
+    try {
+      status = nm.getNodeStatus(dn);
+    } catch (NodeNotFoundException e) {
+      fail("Unexpected exception getting the nodeState");
+    }
+    return status;
+  }
+
+  /**
+   * Retrieves the containerReplica set for a given container or fails the test
+   * if the container cannot be found. This is a helper method to allow the
+   * container replica count to be checked in a lambda expression.
+   * @param c The container for which to retrieve replicas
+   * @return The set of replicas for the given container.
+   */
+  private Set<ContainerReplica> getContainerReplicas(ContainerInfo c) {
+    Set<ContainerReplica> replicas = null;
+    try {
+      replicas = cm.getContainerReplicas(c.containerID());
+    } catch (ContainerNotFoundException e) {
+      fail("Unexpected ContainerNotFoundException");
+    }
+    return replicas;
+  }
+
+  /**
+   * Select any DN hosting a replica from the Replica Set.
+   * @param replicas The set of ContainerReplica
+   * @return Any datanode associated with one of the replicas.
+   */
+  private DatanodeDetails getOneDNHostingReplica(
+      Set<ContainerReplica> replicas) {
+    // Return the datanode hosting the first replica in the set.
+    Iterator<ContainerReplica> iter = replicas.iterator();
+    ContainerReplica c = iter.next();
+    return c.getDatanodeDetails();
+  }
+
+  /**
+   * Given a Datanode, return a string consisting of the hostname and one of
+   * its ports in the form host:port.
+   * @param dn Datanode for which to retrieve the host:port string.
+   * @return host:port for the given DN.
+   */
+  private String getDNHostAndPort(DatanodeDetails dn) {
+    return dn.getHostName() + ":" + dn.getPorts().get(0).getValue();
+  }
+
+  /**
+   * Wait for the given datanode to reach the given operational state.
+   * @param dn Datanode for which to check the state
+   * @param state The state to wait for.
+   * @throws TimeoutException
+   * @throws InterruptedException
+   */
+  private void waitForDnToReachOpState(DatanodeDetails dn,
+      HddsProtos.NodeOperationalState state)
+      throws TimeoutException, InterruptedException {
+    GenericTestUtils.waitFor(
+        () -> getNodeStatus(dn).getOperationalState().equals(state),
+        200, 30000);
+  }
+
+  /**
+   * Wait for the given datanode to reach the given Health state.
+   * @param dn Datanode for which to check the state
+   * @param state The state to wait for.
+   * @throws TimeoutException
+   * @throws InterruptedException
+   */
+  private void waitForDnToReachHealthState(DatanodeDetails dn,
+      HddsProtos.NodeState state)
+      throws TimeoutException, InterruptedException {
+    GenericTestUtils.waitFor(
+        () -> getNodeStatus(dn).getHealth().equals(state),
+        200, 30000);
+  }
+
+  /**
+   * Wait for the given datanode to reach the given persisted state.
+   * @param dn Datanode for which to check the state
+   * @param state The state to wait for.
+   * @throws TimeoutException
+   * @throws InterruptedException
+   */
+  private void waitForDnToReachPersistedOpState(DatanodeDetails dn,
+      HddsProtos.NodeOperationalState state)
+      throws TimeoutException, InterruptedException {
+    GenericTestUtils.waitFor(
+        () -> dn.getPersistedOpState().equals(state),
+        200, 30000);
+  }
+
+  /**
+   * Get any container present in the cluster and wait to ensure 3 replicas
+   * have been reported before returning the container.
+   * @return A single container present on the cluster
+   * @throws Exception
+   */
+  private ContainerInfo waitForAndReturnContainer() throws Exception {
+    final ContainerInfo container = cm.getContainers().get(0);
+    // Ensure all 3 replicas of the container have been reported via ICR
+    waitForContainerReplicas(container, 3);
+    return container;
+  }
+
+  /**
+   * Wait for the ReplicationManager thread to start, and when it does, stop
+   * it.
+   * @throws Exception
+   */
+  private void stopReplicationManager() throws Exception {
+    GenericTestUtils.waitFor(
+        () -> scm.getReplicationManager().isRunning(),
+        200, 30000);
+    scm.getReplicationManager().stop();
+  }
+
+  private void waitForContainerReplicas(ContainerInfo container, int count)
+      throws TimeoutException, InterruptedException {
+    GenericTestUtils.waitFor(
+        () -> getContainerReplicas(container).size() == count,
+        200, 30000);
+  }
+
+}
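
The test above drives the whole admin workflow through ContainerOperationClient using host:port strings. A condensed, hedged sketch of the decommission round trip as exercised here (dn, scmClient and the timeouts mirror the test's own helpers; this is not a standalone test):

  // Decommission one datanode, wait until the state is persisted on the DN,
  // then put it back into service.
  String hostAndPort = dn.getHostName() + ":" + dn.getPorts().get(0).getValue();
  scmClient.decommissionNodes(Arrays.asList(hostAndPort));
  GenericTestUtils.waitFor(
      () -> dn.getPersistedOpState()
          == HddsProtos.NodeOperationalState.DECOMMISSIONED,
      200, 30000);
  scmClient.recommissionNodes(Arrays.asList(hostAndPort));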
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/node/TestQueryNode.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/node/TestQueryNode.java
index 5ac3a2b..2d25512 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/node/TestQueryNode.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/node/TestQueryNode.java
@@ -16,6 +16,9 @@
  */
 package org.apache.hadoop.ozone.scm.node;
 
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
@@ -46,6 +49,12 @@
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.DEAD;
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.HEALTHY;
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState.STALE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.
+    NodeOperationalState.IN_SERVICE;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.
+    NodeOperationalState.DECOMMISSIONING;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.
+    NodeOperationalState.IN_MAINTENANCE;
 
 import static org.apache.hadoop.hdds.scm.ScmConfigKeys
     .OZONE_SCM_DEADNODE_INTERVAL;
@@ -98,7 +107,7 @@
 
   @Test
   public void testHealthyNodesCount() throws Exception {
-    List<HddsProtos.Node> nodes = scmClient.queryNode(HEALTHY,
+    List<HddsProtos.Node> nodes = scmClient.queryNode(null, HEALTHY,
         HddsProtos.QueryScope.CLUSTER, "");
     assertEquals("Expected  live nodes", numOfDatanodes,
         nodes.size());
@@ -113,7 +122,7 @@
             cluster.getStorageContainerManager().getNodeCount(STALE) == 2,
         100, 4 * 1000);
 
-    int nodeCount = scmClient.queryNode(STALE,
+    int nodeCount = scmClient.queryNode(null, STALE,
         HddsProtos.QueryScope.CLUSTER, "").size();
     assertEquals("Mismatch of expected nodes count", 2, nodeCount);
 
@@ -122,13 +131,63 @@
         100, 4 * 1000);
 
     // Assert that we don't find any stale nodes.
-    nodeCount = scmClient.queryNode(STALE,
+    nodeCount = scmClient.queryNode(null, STALE,
         HddsProtos.QueryScope.CLUSTER, "").size();
     assertEquals("Mismatch of expected nodes count", 0, nodeCount);
 
     // Assert that we find the expected number of dead nodes.
-    nodeCount = scmClient.queryNode(DEAD,
+    nodeCount = scmClient.queryNode(null, DEAD,
         HddsProtos.QueryScope.CLUSTER, "").size();
     assertEquals("Mismatch of expected nodes count", 2, nodeCount);
   }
+
+  @Test
+  public void testNodeOperationalStates() throws Exception {
+    StorageContainerManager scm = cluster.getStorageContainerManager();
+    NodeManager nm = scm.getScmNodeManager();
+
+    // Set one node to be something other than IN_SERVICE
+    DatanodeDetails node = nm.getAllNodes().get(0);
+    nm.setNodeOperationalState(node, DECOMMISSIONING);
+
+    // All nodes except the one set to DECOMMISSIONING should be returned
+    int nodeCount = scmClient.queryNode(IN_SERVICE, HEALTHY,
+        HddsProtos.QueryScope.CLUSTER, "").size();
+    assertEquals(numOfDatanodes - 1, nodeCount);
+
+    // null acts as wildcard for opState
+    nodeCount = scmClient.queryNode(null, HEALTHY,
+        HddsProtos.QueryScope.CLUSTER, "").size();
+    assertEquals(numOfDatanodes, nodeCount);
+
+    // null acts as wildcard for nodeState
+    nodeCount = scmClient.queryNode(IN_SERVICE, null,
+        HddsProtos.QueryScope.CLUSTER, "").size();
+    assertEquals(numOfDatanodes - 1, nodeCount);
+
+    // Both null - should return all nodes
+    nodeCount = scmClient.queryNode(null, null,
+        HddsProtos.QueryScope.CLUSTER, "").size();
+    assertEquals(numOfDatanodes, nodeCount);
+
+    // No node should be returned
+    nodeCount = scmClient.queryNode(IN_MAINTENANCE, HEALTHY,
+        HddsProtos.QueryScope.CLUSTER, "").size();
+    assertEquals(0, nodeCount);
+
+    // Test all operational states by looping over them all and setting the
+    // state manually.
+    node = nm.getAllNodes().get(0);
+    for (HddsProtos.NodeOperationalState s :
+        HddsProtos.NodeOperationalState.values()) {
+      nm.setNodeOperationalState(node, s);
+      nodeCount = scmClient.queryNode(s, HEALTHY,
+          HddsProtos.QueryScope.CLUSTER, "").size();
+      if (s == IN_SERVICE) {
+        assertEquals(5, nodeCount);
+      } else {
+        assertEquals(1, nodeCount);
+      }
+    }
+  }
 }
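
queryNode() now takes an operational state in addition to the health state, and null acts as a wildcard for either argument. A minimal hedged sketch, assuming the same initialized scmClient the test uses:

  // Only healthy nodes that are DECOMMISSIONING.
  List<HddsProtos.Node> decommissioning = scmClient.queryNode(
      HddsProtos.NodeOperationalState.DECOMMISSIONING,
      HddsProtos.NodeState.HEALTHY,
      HddsProtos.QueryScope.CLUSTER, "");
  // All healthy nodes, regardless of operational state (null = wildcard).
  List<HddsProtos.Node> allHealthy = scmClient.queryNode(
      null, HddsProtos.NodeState.HEALTHY,
      HddsProtos.QueryScope.CLUSTER, "");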
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/shell/TestOzoneShellHA.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/shell/TestOzoneShellHA.java
index 830a3d6..442225b 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/shell/TestOzoneShellHA.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/shell/TestOzoneShellHA.java
@@ -30,7 +30,7 @@
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.ozone.OFSPath;
+import org.apache.hadoop.ozone.OFSPath;
 import org.apache.hadoop.fs.ozone.OzoneFsShell;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.ozone.MiniOzoneCluster;
@@ -55,7 +55,6 @@
 import static org.junit.Assert.fail;
 import org.junit.Before;
 import org.junit.BeforeClass;
-import org.junit.Ignore;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.Timeout;
@@ -467,7 +466,6 @@
   }
 
   @Test
-  @Ignore("HDDS-3982. Disable moveToTrash in o3fs and ofs temporarily")
   public void testDeleteToTrashOrSkipTrash() throws Exception {
     final String hostPrefix = OZONE_OFS_URI_SCHEME + "://" + omServiceId;
     OzoneConfiguration clientConf = getClientConfForOFS(hostPrefix, conf);
diff --git a/hadoop-ozone/integration-test/src/test/resources/contract/ozone.xml b/hadoop-ozone/integration-test/src/test/resources/contract/ozone.xml
index 11da52c..c940549 100644
--- a/hadoop-ozone/integration-test/src/test/resources/contract/ozone.xml
+++ b/hadoop-ozone/integration-test/src/test/resources/contract/ozone.xml
@@ -112,6 +112,11 @@
     </property>
 
     <property>
+        <name>fs.contract.supports-unbuffer</name>
+        <value>true</value>
+    </property>
+
+    <property>
         <name>fs.contract.supports-unix-permissions</name>
         <value>false</value>
     </property>
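
Setting fs.contract.supports-unbuffer to true declares that streams opened through the Ozone FileSystem implement CanUnbuffer, so the corresponding contract tests now run instead of being skipped. A hedged Java sketch of what those tests exercise; fs and the path are hypothetical:

  // unbuffer() asks the open stream to release buffers and other transient
  // resources while keeping the stream usable for further reads.
  try (FSDataInputStream in = fs.open(new Path("/volume/bucket/key"))) {
    in.read(new byte[4096]);
    in.unbuffer();
  }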
diff --git a/hadoop-ozone/interface-client/src/main/proto/OmClientProtocol.proto b/hadoop-ozone/interface-client/src/main/proto/OmClientProtocol.proto
index 490e228..0455f94 100644
--- a/hadoop-ozone/interface-client/src/main/proto/OmClientProtocol.proto
+++ b/hadoop-ozone/interface-client/src/main/proto/OmClientProtocol.proto
@@ -334,12 +334,16 @@
 
     QUOTA_EXCEEDED = 66;
 
+
     PERSIST_UPGRADE_TO_LAYOUT_VERSION_FAILED = 67;
     REMOVE_UPGRADE_TO_LAYOUT_VERSION_FAILED = 68;
     UPDATE_LAYOUT_VERSION_FAILED = 69;
     LAYOUT_FEATURE_FINALIZATION_FAILED = 70;
     PREPARE_FAILED = 71;
     NOT_SUPPORTED_OPERATION_WHEN_PREPARED = 72;
+
+    QUOTA_ERROR = 73;
+
 }
 
 /**
@@ -385,8 +389,8 @@
     optional uint64 objectID = 8;
     optional uint64 updateID = 9;
     optional uint64 modificationTime = 10;
-    optional uint64 quotaInCounts = 11;
-
+    optional int64 quotaInNamespace = 11 [default = -2];
+    optional uint64 usedNamespace = 12;
 }
 
 /**
@@ -446,6 +450,7 @@
     optional uint64 quotaInBytes = 3;
     optional uint64 modificationTime = 4;
     optional uint64 quotaInCounts = 5;
+    optional uint64 quotaInNamespace = 6;
 }
 
 message SetVolumePropertyResponse {
@@ -525,8 +530,9 @@
     optional string sourceVolume = 12;
     optional string sourceBucket = 13;
     optional uint64 usedBytes = 14;
-    optional uint64 quotaInBytes = 15;
-    optional uint64 quotaInCounts = 16;
+    optional int64 quotaInBytes = 15 [default = -2];
+    optional int64 quotaInNamespace = 16 [default = -2];
+    optional uint64 usedNamespace = 17;
 }
 
 enum StorageTypeProto {
@@ -598,6 +604,7 @@
     repeated hadoop.hdds.KeyValue metadata = 7;
     optional uint64 quotaInBytes = 8;
     optional uint64 quotaInCounts = 9;
+    optional uint64 quotaInNamespace = 10;
 }
 
 message PrefixInfo {
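
The new quotaInNamespace fields default to -2. As far as this patch shows, the negative default acts as a sentinel for volumes/buckets created before the quota feature (no quota recorded), as opposed to an explicit, non-negative limit; that interpretation is an assumption, not something stated in the diff. A hedged Java-side sketch:

  // Assumption: a non-negative value is an explicit namespace quota, while the
  // -2 proto default (and other negative values) means "no quota set/enforced".
  static boolean hasNamespaceQuota(long quotaInNamespace) {
    return quotaInNamespace >= 0;
  }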
diff --git a/hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java b/hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
index 5e4e75b..824a654 100644
--- a/hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
+++ b/hadoop-ozone/interface-storage/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
@@ -17,11 +17,15 @@
 package org.apache.hadoop.ozone.om;
 
 import java.io.IOException;
+import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.utils.db.TableIterator;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
 import org.apache.hadoop.ozone.common.BlockGroup;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
@@ -387,4 +391,10 @@
    * @return table names in OM DB.
    */
   Set<String> listTableNames();
+
+  Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>>
+      getBucketIterator();
+
+  TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+      getKeyIterator();
 }
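
OMMetadataManager now exposes an iterator over the bucket table cache and one over the key table, presumably so background services can walk them without loading everything at once. A hedged sketch of consuming the bucket iterator; the getCacheValue() accessor and the null-value tombstone convention are assumptions based on how the hdds table cache is used elsewhere:

  Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> buckets =
      metadataManager.getBucketIterator();
  while (buckets.hasNext()) {
    Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>> entry = buckets.next();
    OmBucketInfo bucket = entry.getValue().getCacheValue();
    if (bucket == null) {
      continue;  // assumed tombstone for a deleted bucket
    }
    // inspect bucket quota / usage here, e.g. bucket.getUsedBytes()
  }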
diff --git a/hadoop-ozone/ozone-manager/pom.xml b/hadoop-ozone/ozone-manager/pom.xml
index dab1d40..b7e5d29 100644
--- a/hadoop-ozone/ozone-manager/pom.xml
+++ b/hadoop-ozone/ozone-manager/pom.xml
@@ -33,13 +33,13 @@
     <dependency>
       <groupId>org.aspectj</groupId>
       <artifactId>aspectjrt</artifactId>
-      <version>1.8.9</version>
+      <version>1.9.1</version>
     </dependency>
 
     <dependency>
       <groupId>org.aspectj</groupId>
       <artifactId>aspectjweaver</artifactId>
-      <version>1.8.9</version>
+      <version>1.9.1</version>
     </dependency>
 
     <dependency>
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyDeletingService.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyDeletingService.java
index 466a55f..7c9dabd 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyDeletingService.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyDeletingService.java
@@ -129,7 +129,7 @@
       // OzoneManager can be null for testing
       return true;
     }
-    return ozoneManager.isLeader();
+    return ozoneManager.isLeaderReady();
   }
 
   private boolean isRatisEnabled() {
@@ -282,11 +282,16 @@
 
   private RaftClientRequest createRaftClientRequestForPurge(
       OMRequest omRequest) {
-    return new RaftClientRequest(clientId,
-        ozoneManager.getOmRatisServer().getRaftPeerId(),
-        ozoneManager.getOmRatisServer().getRaftGroupId(), runCount.get(),
-        Message.valueOf(OMRatisHelper.convertRequestToByteString(omRequest)),
-        RaftClientRequest.writeRequestType(), null);
+    return RaftClientRequest.newBuilder()
+        .setClientId(clientId)
+        .setServerId(ozoneManager.getOmRatisServer().getRaftPeerId())
+        .setGroupId(ozoneManager.getOmRatisServer().getRaftGroupId())
+        .setCallId(runCount.get())
+        .setMessage(
+            Message.valueOf(
+                OMRatisHelper.convertRequestToByteString(omRequest)))
+        .setType(RaftClientRequest.writeRequestType())
+        .build();
   }
 
   /**
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
index 72ebfd2..03f639b 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
@@ -24,6 +24,7 @@
 import java.time.Instant;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.Collection;
 import java.util.Collections;
 import java.util.EnumSet;
 import java.util.HashMap;
@@ -47,13 +48,13 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.BlockTokenSecretProto.AccessModeProto;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
 import org.apache.hadoop.hdds.scm.container.common.helpers.AllocatedBlock;
 import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
 import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
 import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol;
 import org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol;
 import org.apache.hadoop.hdds.utils.BackgroundService;
@@ -109,9 +110,12 @@
 import com.google.common.base.Strings;
 import org.apache.commons.codec.digest.DigestUtils;
 import org.apache.commons.lang3.StringUtils;
+
 import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_KEY_PROVIDER_PATH;
 import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_BLOCK_TOKEN_ENABLED;
 import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_BLOCK_TOKEN_ENABLED_DEFAULT;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.BlockTokenSecretProto.AccessModeProto.READ;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.BlockTokenSecretProto.AccessModeProto.WRITE;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.DFS_CONTAINER_RATIS_ENABLED_DEFAULT;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.DFS_CONTAINER_RATIS_ENABLED_KEY;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_SERVICE_INTERVAL;
@@ -375,7 +379,7 @@
       if (grpcBlockTokenEnabled) {
         builder.setToken(secretManager
             .generateToken(remoteUser, allocatedBlock.getBlockID().toString(),
-                getAclForUser(remoteUser), scmBlockSize));
+                EnumSet.of(READ, WRITE), scmBlockSize));
       }
       locationInfos.add(builder.build());
     }
@@ -390,16 +394,6 @@
     return (ugi != null) ? ugi : UserGroupInformation.getCurrentUser();
   }
 
-  /**
-   * Return acl for user.
-   * @param user
-   *
-   * */
-  private EnumSet<AccessModeProto> getAclForUser(String user) {
-    // TODO: Return correct acl for user.
-    return EnumSet.allOf(AccessModeProto.class);
-  }
-
   private EncryptedKeyVersion generateEDEK(
       final String ezKeyName) throws IOException {
     if (ezKeyName == null) {
@@ -682,38 +676,34 @@
       }
       throw new OMException("Key not found", KEY_NOT_FOUND);
     }
+
+    // add block token for read.
+    addBlockToken4Read(value);
+
+    // Refresh container pipeline info from SCM
+    // based on OmKeyArgs.refreshPipeline flag
+    // value won't be null as the check is done inside try/catch block.
+    refresh(value);
+
+    if (args.getSortDatanodes()) {
+      sortDatanodes(clientAddress, value);
+    }
+    return value;
+  }
+
+  private void addBlockToken4Read(OmKeyInfo value) throws IOException {
+    Preconditions.checkNotNull(value, "OMKeyInfo cannot be null");
     if (grpcBlockTokenEnabled) {
       String remoteUser = getRemoteUser().getShortUserName();
       for (OmKeyLocationInfoGroup key : value.getKeyLocationVersions()) {
         key.getLocationList().forEach(k -> {
           k.setToken(secretManager.generateToken(remoteUser,
-                  k.getBlockID().getContainerBlockID().toString(),
-                  getAclForUser(remoteUser), k.getLength()));
+              k.getBlockID().getContainerBlockID().toString(),
+              EnumSet.of(READ), k.getLength()));
         });
       }
     }
-
-    // Refresh container pipeline info from SCM
-    // based on OmKeyArgs.refreshPipeline flag
-    // value won't be null as the check is done inside try/catch block.
-    refreshPipeline(value);
-
-    if (args.getSortDatanodes()) {
-      sortDatanodeInPipeline(value, clientAddress);
-    }
-    return value;
   }
-
-  /**
-   * Refresh pipeline info in OM by asking SCM.
-   * @param value OmKeyInfo
-   */
-  @VisibleForTesting
-  protected void refreshPipeline(OmKeyInfo value) throws IOException {
-    Preconditions.checkNotNull(value, "OMKeyInfo cannot be null");
-    refreshPipeline(Arrays.asList(value));
-  }
-
   /**
    * Refresh pipeline info in OM by asking SCM.
    * @param keyList a list of OmKeyInfo
@@ -930,7 +920,7 @@
 
     List<OmKeyInfo> keyList = metadataManager.listKeys(volumeName, bucketName,
         startKey, keyPrefix, maxKeys);
-    refreshPipeline(keyList);
+
     return keyList;
   }
 
@@ -1825,9 +1815,9 @@
         // refreshPipeline flag check has been removed as part of
         // https://issues.apache.org/jira/browse/HDDS-3658.
         // Please refer this jira for more details.
-        refreshPipeline(fileKeyInfo);
+        refresh(fileKeyInfo);
         if (sortDatanodes) {
-          sortDatanodeInPipeline(fileKeyInfo, clientAddress);
+          sortDatanodes(clientAddress, fileKeyInfo);
         }
         return new OzoneFileStatus(fileKeyInfo, scmBlockSize, false);
       }
@@ -1990,6 +1980,8 @@
             clientAddress);
       //if key is not of type file or if key is not found we throw an exception
     if (fileStatus.isFile()) {
+      // add block token for read.
+      addBlockToken4Read(fileStatus.getKeyInfo());
       return fileStatus.getKeyInfo();
     }
     throw new OMException("Can not write to directory: " + keyName,
@@ -2212,9 +2204,7 @@
     refreshPipeline(keyInfoList);
 
     if (args.getSortDatanodes()) {
-      for (OzoneFileStatus fileStatus : fileStatusList) {
-        sortDatanodeInPipeline(fileStatus.getKeyInfo(), clientAddress);
-      }
+      sortDatanodes(clientAddress, keyInfoList.toArray(new OmKeyInfo[0]));
     }
 
     return fileStatusList;
@@ -2309,38 +2299,67 @@
     return encInfo;
   }
 
-  private void sortDatanodeInPipeline(OmKeyInfo keyInfo, String clientMachine) {
-    if (keyInfo != null && clientMachine != null && !clientMachine.isEmpty()) {
-      for (OmKeyLocationInfoGroup key : keyInfo.getKeyLocationVersions()) {
-        key.getLocationList().forEach(k -> {
-          List<DatanodeDetails> nodes = k.getPipeline().getNodes();
-          if (nodes == null || nodes.isEmpty()) {
-            LOG.warn("Datanodes for pipeline {} is empty",
-                k.getPipeline().getId().toString());
-            return;
-          }
-          List<String> nodeList = new ArrayList<>();
-          nodes.stream().forEach(node ->
-              nodeList.add(node.getUuidString()));
-          try {
-            List<DatanodeDetails> sortedNodes = scmClient.getBlockClient()
-                .sortDatanodes(nodeList, clientMachine);
-            k.getPipeline().setNodesInOrder(sortedNodes);
-            if (LOG.isDebugEnabled()) {
-              LOG.debug("Sort datanodes {} for client {}, return {}", nodes,
-                  clientMachine, sortedNodes);
+  @VisibleForTesting
+  void sortDatanodes(String clientMachine, OmKeyInfo... keyInfos) {
+    if (keyInfos != null && clientMachine != null && !clientMachine.isEmpty()) {
+      Map<Set<String>, List<DatanodeDetails>> sortedPipelines = new HashMap<>();
+      for (OmKeyInfo keyInfo : keyInfos) {
+        OmKeyLocationInfoGroup key = keyInfo.getLatestVersionLocations();
+        if (key == null) {
+          LOG.warn("No location for key {}", keyInfo);
+          continue;
+        }
+        for (OmKeyLocationInfo k : key.getLocationList()) {
+          Pipeline pipeline = k.getPipeline();
+          List<DatanodeDetails> nodes = pipeline.getNodes();
+          List<String> uuidList = toNodeUuid(nodes);
+          Set<String> uuidSet = new HashSet<>(uuidList);
+          List<DatanodeDetails> sortedNodes = sortedPipelines.get(uuidSet);
+          if (sortedNodes == null) {
+            if (nodes.isEmpty()) {
+              LOG.warn("No datanodes in pipeline {}", pipeline.getId());
+              continue;
             }
-          } catch (IOException e) {
-            LOG.warn("Unable to sort datanodes based on distance to " +
-                "client, volume=" + keyInfo.getVolumeName() +
-                ", bucket=" + keyInfo.getBucketName() +
-                ", key=" + keyInfo.getKeyName() +
-                ", client=" + clientMachine +
-                ", datanodes=" + nodes.toString() +
-                ", exception=" + e.getMessage());
+            sortedNodes = sortDatanodes(clientMachine, nodes, keyInfo,
+                uuidList);
+            if (sortedNodes != null) {
+              sortedPipelines.put(uuidSet, sortedNodes);
+            }
+          } else if (LOG.isDebugEnabled()) {
+            LOG.debug("Found sorted datanodes for pipeline {} and client {} "
+                + "in cache", pipeline.getId(), clientMachine);
           }
-        });
+          pipeline.setNodesInOrder(sortedNodes);
+        }
       }
     }
   }
+
+  private List<DatanodeDetails> sortDatanodes(String clientMachine,
+      List<DatanodeDetails> nodes, OmKeyInfo keyInfo, List<String> nodeList) {
+    List<DatanodeDetails> sortedNodes = null;
+    try {
+      sortedNodes = scmClient.getBlockClient()
+          .sortDatanodes(nodeList, clientMachine);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Sorted datanodes {} for client {}, result: {}", nodes,
+            clientMachine, sortedNodes);
+      }
+    } catch (IOException e) {
+      LOG.warn("Unable to sort datanodes based on distance to client, "
+          + " volume={}, bucket={}, key={}, client={}, datanodes={}, "
+          + " exception={}",
+          keyInfo.getVolumeName(), keyInfo.getBucketName(),
+          keyInfo.getKeyName(), clientMachine, nodeList, e.getMessage());
+    }
+    return sortedNodes;
+  }
+
+  private static List<String> toNodeUuid(Collection<DatanodeDetails> nodes) {
+    List<String> nodeSet = new ArrayList<>(nodes.size());
+    for (DatanodeDetails node : nodes) {
+      nodeSet.add(node.getUuidString());
+    }
+    return nodeSet;
+  }
 }
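For context on the refactored sortDatanodes above: pipelines that share the same replica set are now sorted once per request by caching the result against the set of node UUIDs. A minimal standalone sketch of that caching idea, with an illustrative sorter standing in for the SCM block client (names here are hypothetical, not the OM/SCM API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiFunction;

// Minimal sketch: cache one sort result per distinct set of node IDs so that
// pipelines sharing the same replicas are only sorted once per request.
public final class PipelineSortCache {

  private final Map<Set<String>, List<String>> cache = new HashMap<>();
  // Hypothetical sorter standing in for the SCM "sort by distance" call.
  private final BiFunction<List<String>, String, List<String>> sorter;

  PipelineSortCache(BiFunction<List<String>, String, List<String>> sorter) {
    this.sorter = sorter;
  }

  List<String> sortedFor(List<String> nodeUuids, String clientMachine) {
    Set<String> key = new HashSet<>(nodeUuids);
    return cache.computeIfAbsent(key,
        k -> sorter.apply(nodeUuids, clientMachine));
  }

  public static void main(String[] args) {
    // Fake "distance" sort: plain lexicographic order, for illustration only.
    PipelineSortCache sortCache = new PipelineSortCache((nodes, client) -> {
      List<String> sorted = new ArrayList<>(nodes);
      Collections.sort(sorted);
      return sorted;
    });
    List<String> pipelineA = Arrays.asList("dn-3", "dn-1", "dn-2");
    List<String> pipelineB = Arrays.asList("dn-2", "dn-3", "dn-1"); // same replica set
    System.out.println(sortCache.sortedFor(pipelineA, "client-1"));
    System.out.println(sortCache.sortedFor(pipelineB, "client-1")); // served from cache
  }
}
```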
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java
index 3504b91..310a8a6 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java
@@ -205,6 +205,10 @@
     numKeys.incr();
   }
 
+  public void incNumKeys(int count) {
+    numKeys.incr(count);
+  }
+
   public void decNumKeys() {
     numKeys.incr(-1);
   }
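The new incNumKeys(int count) overload lets callers bump the key counter once per committed batch instead of looping per key. A small illustrative sketch of the same pattern, using LongAdder as a stand-in for the Hadoop metrics counter backing OMMetrics:

```java
import java.util.concurrent.atomic.LongAdder;

// Sketch of the batched-increment pattern behind incNumKeys(int count):
// one add() per batch instead of one increment() per key.
public final class KeyCounterDemo {
  private final LongAdder numKeys = new LongAdder();

  void incNumKeys()          { numKeys.increment(); }
  void incNumKeys(int count) { numKeys.add(count); }

  public static void main(String[] args) {
    KeyCounterDemo metrics = new KeyCounterDemo();
    metrics.incNumKeys(42);   // e.g. after committing a 42-key batch
    metrics.incNumKeys();     // single-key path still works
    System.out.println(metrics.numKeys.sum()); // 43
  }
}
```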
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
index 575f44a..5986129 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
@@ -43,7 +43,7 @@
 import org.apache.hadoop.hdds.utils.db.TypedTable;
 import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
 import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
-import org.apache.hadoop.hdds.utils.db.cache.TableCacheImpl;
+import org.apache.hadoop.hdds.utils.db.cache.TableCache.CacheType;
 import org.apache.hadoop.ozone.OmUtils;
 import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.common.BlockGroup;
@@ -373,17 +373,16 @@
             PersistedUserVolumeInfo.class);
     checkTableStatus(userTable, USER_TABLE);
 
-    TableCacheImpl.CacheCleanupPolicy cleanupPolicy =
-        TableCacheImpl.CacheCleanupPolicy.NEVER;
+    CacheType cacheType = CacheType.FULL_CACHE;
 
     volumeTable =
         this.store.getTable(VOLUME_TABLE, String.class, OmVolumeArgs.class,
-            cleanupPolicy);
+            cacheType);
     checkTableStatus(volumeTable, VOLUME_TABLE);
 
     bucketTable =
         this.store.getTable(BUCKET_TABLE, String.class, OmBucketInfo.class,
-            cleanupPolicy);
+            cacheType);
 
     checkTableStatus(bucketTable, BUCKET_TABLE);
 
@@ -746,6 +745,16 @@
     return result;
   }
 
+  public Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>>
+      getBucketIterator() {
+    return bucketTable.cacheIterator();
+  }
+
+  public TableIterator<String, ? extends KeyValue<String, OmKeyInfo>>
+      getKeyIterator(){
+    return keyTable.iterator();
+  }
+
   @Override
   public List<OmKeyInfo> listKeys(String volumeName, String bucketName,
       String startKey, String keyPrefix, int maxKeys) throws IOException {
@@ -779,7 +788,8 @@
       skipStartKey = true;
     } else {
       // This allows us to seek directly to the first key with the right prefix.
-      seekKey = getOzoneKey(volumeName, bucketName, keyPrefix);
+      seekKey = getOzoneKey(volumeName, bucketName,
+          StringUtil.isNotBlank(keyPrefix) ? keyPrefix : OM_KEY_PREFIX);
     }
 
     String seekPrefix;
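The listKeys change above seeds the iterator with OM_KEY_PREFIX when the key prefix is blank, so the seek lands on the bucket's first key rather than an empty suffix. A rough sketch of that seek-key construction, with the DB key layout simplified (these are not the real OmMetadataManagerImpl helpers):

```java
// Sketch of the seek-key logic: when no startKey is given and the prefix is
// blank, seek to "/volume/bucket/" so iteration starts at the bucket's first key.
public final class SeekKeyDemo {
  private static final String OM_KEY_PREFIX = "/";

  static String getOzoneKey(String volume, String bucket, String key) {
    // Simplified form of the DB key layout: /volume/bucket/key
    return OM_KEY_PREFIX + volume + OM_KEY_PREFIX + bucket + OM_KEY_PREFIX + key;
  }

  static String seekKey(String volume, String bucket, String keyPrefix) {
    String effective = (keyPrefix == null || keyPrefix.trim().isEmpty())
        ? OM_KEY_PREFIX : keyPrefix;
    return getOzoneKey(volume, bucket, effective);
  }

  public static void main(String[] args) {
    System.out.println(seekKey("vol1", "buck1", ""));      // blank prefix case
    System.out.println(seekKey("vol1", "buck1", "dir1/")); // explicit prefix
  }
}
```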
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
index df29e0e..a0e9ef8 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
@@ -25,12 +25,12 @@
 import java.io.OutputStreamWriter;
 import java.net.InetAddress;
 import java.net.InetSocketAddress;
+import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.StandardCopyOption;
 import java.security.KeyPair;
 import java.security.PrivateKey;
-import java.security.PrivilegedExceptionAction;
 import java.security.PublicKey;
 import java.security.cert.CertificateException;
 import java.util.ArrayList;
@@ -56,8 +56,6 @@
 import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Trash;
-import org.apache.hadoop.fs.TrashPolicy;
 import org.apache.hadoop.hdds.HddsConfigKeys;
 import org.apache.hadoop.hdds.HddsUtils;
 import org.apache.hadoop.hdds.annotation.InterfaceAudience;
@@ -183,6 +181,8 @@
 import org.apache.hadoop.util.KMSUtil;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.ShutdownHookManager;
+import org.apache.hadoop.util.Time;
+
 import com.fasterxml.jackson.databind.ObjectMapper;
 import com.fasterxml.jackson.databind.ObjectReader;
 import com.fasterxml.jackson.databind.ObjectWriter;
@@ -216,8 +216,10 @@
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE_DEFAULT;
 import static org.apache.hadoop.ozone.OzoneConsts.DB_TRANSIENT_MARKER;
+import static org.apache.hadoop.ozone.OzoneConsts.DEFAULT_OM_UPDATE_ID;
 import static org.apache.hadoop.ozone.OzoneConsts.OM_METRICS_FILE;
 import static org.apache.hadoop.ozone.OzoneConsts.OM_METRICS_TEMP_FILE;
+import static org.apache.hadoop.ozone.OzoneConsts.OM_RATIS_SNAPSHOT_DIR;
 import static org.apache.hadoop.ozone.OzoneConsts.RPC_PORT;
 import static org.apache.hadoop.ozone.OzoneConsts.TRANSACTION_INFO_KEY;
 import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
@@ -239,10 +241,9 @@
 import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.KEY_NOT_FOUND;
 import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.TOKEN_ERROR_OTHER;
 import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.VOLUME_LOCK;
+import static org.apache.hadoop.ozone.om.ratis.OzoneManagerRatisServer.RaftServerStatus.LEADER_AND_READY;
 import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneManagerService.newReflectiveBlockingService;
 import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.PrepareStatusResponse.PrepareStatus;
-
-import org.apache.hadoop.util.Time;
 import org.apache.ratis.proto.RaftProtos.RaftPeerRole;
 import org.apache.ratis.server.protocol.TermIndex;
 import org.apache.ratis.util.ExitUtils;
@@ -354,8 +355,8 @@
   private Thread emptier;
 
   @SuppressWarnings("methodlength")
-  private OzoneManager(OzoneConfiguration conf)
-      throws IOException, AuthenticationException {
+  private OzoneManager(OzoneConfiguration conf) throws IOException,
+      AuthenticationException {
     super(OzoneVersionInfo.OZONE_VERSION_INFO);
     Preconditions.checkNotNull(conf);
     configuration = conf;
@@ -468,7 +469,7 @@
     addS3GVolumeToDB();
 
     this.omRatisSnapshotInfo = new OMRatisSnapshotInfo();
-    initializeRatisServer();
+
     if (isRatisEnabled) {
       // Create Ratis storage dir
       String omRatisDirectory =
@@ -478,16 +479,30 @@
             " must be defined.");
       }
       OmUtils.createOMDir(omRatisDirectory);
+
       // Create Ratis snapshot dir
       omRatisSnapshotDir = OmUtils.createOMDir(
           OzoneManagerRatisServer.getOMRatisSnapshotDirectory(configuration));
 
+      // Before starting the Ratis server, check whether a previous
+      // installation left a snapshot directory inside the Ratis storage
+      // directory. If so, move it to the new snapshot directory.
+
+      File snapshotDir = new File(omRatisDirectory, OM_RATIS_SNAPSHOT_DIR);
+
+      if (snapshotDir.isDirectory()) {
+        FileUtils.moveDirectory(snapshotDir.toPath(),
+            omRatisSnapshotDir.toPath());
+      }
+
       if (peerNodes != null && !peerNodes.isEmpty()) {
         this.omSnapshotProvider = new OzoneManagerSnapshotProvider(
             configuration, omRatisSnapshotDir, peerNodes);
       }
     }
 
+    initializeRatisServer();
+
     metrics = OMMetrics.create();
     omClientProtocolMetrics = ProtocolMessageMetrics
         .create("OmClientProtocol", "Ozone Manager RPC endpoint",
@@ -657,7 +672,7 @@
           getTempMetricsStorageFile().getParentFile().toPath());
       try (BufferedWriter writer = new BufferedWriter(
           new OutputStreamWriter(new FileOutputStream(
-              getTempMetricsStorageFile()), "UTF-8"))) {
+              getTempMetricsStorageFile()), StandardCharsets.UTF_8))) {
         OmMetricsInfo metricsInfo = new OmMetricsInfo();
         metricsInfo.setNumKeys(metrics.getNumKeys());
         WRITER.writeValue(writer, metricsInfo);
@@ -1205,9 +1220,6 @@
 
     omRpcServer.start();
     isOmRpcServerRunning = true;
-
-    // TODO: Start this thread only on the leader node.
-    //  Should be fixed after HDDS-4451.
     startTrashEmptier(configuration);
 
     registerMXBean();
@@ -1266,8 +1278,6 @@
     omRpcServer.start();
     isOmRpcServerRunning = true;
 
-    // TODO: Start this thread only on the leader node.
-    //  Should be fixed after HDDS-4451.
     startTrashEmptier(configuration);
     registerMXBean();
 
@@ -1293,24 +1303,8 @@
       throw new IOException("Cannot start trash emptier with negative interval."
               + " Set " + FS_TRASH_INTERVAL_KEY + " to a positive value.");
     }
-
-    // configuration for the FS instance that  points to a root OFS uri.
-    // This will ensure that it will cover all volumes and buckets
-    Configuration fsconf = new Configuration();
-    String rootPath = String.format("%s://%s/",
-            OzoneConsts.OZONE_OFS_URI_SCHEME, conf.get(OZONE_OM_ADDRESS_KEY));
-
-    fsconf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, rootPath);
-    FileSystem fs = SecurityUtil.doAsLoginUser(
-            new PrivilegedExceptionAction<FileSystem>() {
-          @Override
-          public FileSystem run() throws IOException {
-            return FileSystem.get(fsconf);
-          }
-        });
-    conf.setClass("fs.trash.classname", TrashPolicyOzone.class,
-        TrashPolicy.class);
-    this.emptier = new Thread(new Trash(fs, conf).
+    FileSystem fs = new TrashOzoneFileSystem(this);
+    this.emptier = new Thread(new OzoneTrash(fs, conf, this).
       getEmptier(), "Trash Emptier");
     this.emptier.setDaemon(true);
     this.emptier.start();
@@ -1419,12 +1413,7 @@
       if (httpServer != null) {
         httpServer.stop();
       }
-      // TODO:Also stop this thread if an OM switches from leader to follower.
-      //  Should be fixed after HDDS-4451.
-      if (this.emptier != null) {
-        emptier.interrupt();
-        emptier = null;
-      }
+      stopTrashEmptier();
       metadataManager.stop();
       metrics.unRegister();
       omClientProtocolMetrics.unregister();
@@ -1826,7 +1815,9 @@
             obj.getKeyName());
         throw new OMException("User " + context.getClientUgi().getUserName() +
             " doesn't have " + context.getAclRights() +
-            " permission to access " + obj.getResourceType(),
+            " permission to access " + obj.getResourceType() + " " +
+            obj.getVolumeName() + " " + obj.getBucketName() + " " +
+            obj.getKeyName(),
             ResultCodes.PERMISSION_DENIED);
       }
       return false;
@@ -1874,12 +1865,12 @@
    * Changes the Quota on a volume.
    *
    * @param volume - Name of the volume.
-   * @param quotaInCounts - Volume quota in counts.
+   * @param quotaInNamespace - Volume quota in namespace (counts).
    * @param quotaInBytes - Volume quota in bytes.
    * @throws IOException
    */
   @Override
-  public void setQuota(String volume, long quotaInCounts,
+  public void setQuota(String volume, long quotaInNamespace,
                        long quotaInBytes) throws IOException {
     throw new UnsupportedOperationException("OzoneManager does not require " +
         "this to be implemented. As this requests use a new approach");
@@ -3317,6 +3308,7 @@
       // During stopServices, if KeyManager was stopped successfully and
       // OMMetadataManager stop failed, we should restart the KeyManager.
       keyManager.start(configuration);
+      startTrashEmptier(configuration);
       throw e;
     }
 
@@ -3407,6 +3399,14 @@
     keyManager.stop();
     stopSecretManager();
     metadataManager.stop();
+    stopTrashEmptier();
+  }
+
+  private void stopTrashEmptier() {
+    if (this.emptier != null) {
+      emptier.interrupt();
+      emptier = null;
+    }
   }
 
   /**
@@ -3475,6 +3475,7 @@
     // Restart required services
     metadataManager.start(configuration);
     keyManager.start(configuration);
+    startTrashEmptier(configuration);
 
     // Set metrics and start metrics back ground thread
     metrics.setNumVolumes(metadataManager.countRowsInTable(metadataManager
@@ -3536,21 +3537,16 @@
   public long getMaxUserVolumeCount() {
     return maxUserVolumeCount;
   }
-
   /**
-   * Checks the Leader status of OM Ratis Server.
-   * Note that this status has a small window of error. It should not be used
-   * to determine the absolute leader status.
-   * If it is the leader, the role status is cached till Ratis server
-   * notifies of leader change. If it is not leader, the role information is
-   * retrieved through by submitting a GroupInfoRequest to Ratis server.
-   * <p>
-   * If ratis is not enabled, then it always returns true.
+   * Returns true if the current OM node is the leader and is ready to
+   * process requests.
    *
-   * @return Return true if this node is the leader, false otherwsie.
+   * If Ratis is not enabled, this always returns true.
+   * @return true if this node is the leader and ready, false otherwise.
    */
-  public boolean isLeader() {
-    return isRatisEnabled ? omRatisServer.isLeader() : true;
+  public boolean isLeaderReady() {
+    return isRatisEnabled ?
+        omRatisServer.checkLeaderStatus() == LEADER_AND_READY : true;
   }
 
   /**
@@ -3742,8 +3738,7 @@
 
       // Add volume and user info to DB and cache.
 
-      OmVolumeArgs omVolumeArgs = createS3VolumeInfo(s3VolumeName,
-          transactionID, objectID);
+      OmVolumeArgs omVolumeArgs = createS3VolumeInfo(s3VolumeName, objectID);
 
       String dbUserKey = metadataManager.getUserKey(userName);
       PersistedUserVolumeInfo userVolumeInfo =
@@ -3777,14 +3772,19 @@
     }
   }
 
-  private OmVolumeArgs createS3VolumeInfo(String s3Volume, long transactionID,
+  private OmVolumeArgs createS3VolumeInfo(String s3Volume,
       long objectID) throws IOException {
     String userName = UserGroupInformation.getCurrentUser().getShortUserName();
     long time = Time.now();
 
+    // Set the updateID to DEFAULT_OM_UPDATE_ID. If the volume were created
+    // with the maximum transactionID as its updateID, later ACL operations
+    // on the S3 volume would fail the updateID check, which requires every
+    // new updateID to be greater than the previous one.
+
     OmVolumeArgs.Builder omVolumeArgs = new OmVolumeArgs.Builder()
         .setVolume(s3Volume)
-        .setUpdateID(transactionID)
+        .setUpdateID(DEFAULT_OM_UPDATE_ID)
         .setObjectID(objectID)
         .setCreationTime(time)
         .setModificationTime(time)
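One of the OzoneManager changes above is easiest to see in isolation: the one-time move of an old snapshot directory out of the Ratis storage directory before the Ratis server starts. A hedged sketch of that migration using plain java.nio (the OM itself goes through its own FileUtils helper; paths and names here are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the one-time migration: if an older installation kept snapshots
// under <ratisDir>/snapshot, move that directory to the new snapshot location.
public final class SnapshotDirMigration {

  static void migrateIfNeeded(Path omRatisDir, Path newSnapshotDir)
      throws IOException {
    Path oldSnapshotDir = omRatisDir.resolve("snapshot");
    if (Files.isDirectory(oldSnapshotDir)) {
      Files.createDirectories(newSnapshotDir.getParent());
      Files.move(oldSnapshotDir, newSnapshotDir);
    }
  }

  public static void main(String[] args) throws IOException {
    Path ratisDir = Files.createTempDirectory("om-ratis");
    Files.createDirectories(ratisDir.resolve("snapshot"));
    Path target = Files.createTempDirectory("om-ratis-new").resolve("snapshot");
    migrateIfNeeded(ratisDir, target);
    System.out.println(Files.isDirectory(target)); // true
  }
}
```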
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneTrash.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneTrash.java
new file mode 100644
index 0000000..310c301
--- /dev/null
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneTrash.java
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Trash;
+import org.apache.hadoop.fs.TrashPolicy;
+
+import java.io.IOException;
+
+/**
+ * Ozone-specific Trash that takes an OzoneManager as a parameter.
+ */
+public class OzoneTrash extends Trash {
+
+  private TrashPolicy trashPolicy;
+  public OzoneTrash(FileSystem fs, Configuration conf, OzoneManager om)
+      throws IOException {
+    super(fs, conf);
+    this.trashPolicy = new TrashPolicyOzone(fs, conf, om);
+  }
+  @Override
+  public Runnable getEmptier() throws IOException {
+    return this.trashPolicy.getEmptier();
+  }
+
+}
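OzoneTrash keeps the standard Trash surface but answers getEmptier() from an OM-aware policy. A small sketch of that delegation pattern with stand-in types (these are not the Hadoop fs classes, just an illustration of the shape):

```java
// Sketch of the delegation pattern used by OzoneTrash: the Trash facade keeps
// its usual constructor, but getEmptier() is answered by a server-aware policy.
public final class TrashDelegationDemo {

  interface TrashPolicy {
    Runnable getEmptier();
  }

  static class DefaultTrash {
    Runnable getEmptier() {
      return () -> System.out.println("default emptier");
    }
  }

  static class ServerAwareTrash extends DefaultTrash {
    private final TrashPolicy policy;

    ServerAwareTrash(TrashPolicy policy) {
      this.policy = policy;
    }

    @Override
    Runnable getEmptier() {
      return policy.getEmptier(); // delegate to the server-aware policy
    }
  }

  public static void main(String[] args) {
    TrashPolicy omAwarePolicy =
        () -> () -> System.out.println("OM-aware emptier");
    new ServerAwareTrash(omAwarePolicy).getEmptier().run();
  }
}
```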
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/TrashOzoneFileSystem.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/TrashOzoneFileSystem.java
new file mode 100644
index 0000000..9c9da4c
--- /dev/null
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/TrashOzoneFileSystem.java
@@ -0,0 +1,457 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om;
+
+import com.google.common.base.Preconditions;
+import com.google.protobuf.RpcController;
+import com.google.protobuf.ServiceException;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.hdds.utils.db.TableIterator;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.OFSPath;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OzoneFileStatus;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.Progressable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Iterator;
+import java.util.UUID;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
+import static org.apache.hadoop.ozone.om.helpers.OzoneFSUtils.addTrailingSlashIfNeeded;
+import static org.apache.hadoop.ozone.om.helpers.OzoneFSUtils.pathToKey;
+import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Type.DeleteKeys;
+
+/**
+ * FileSystem to be used by the Trash Emptier.
+ * Only the APIs used by the trash emptier are implemented.
+ */
+public class TrashOzoneFileSystem extends FileSystem {
+
+  private static final RpcController NULL_RPC_CONTROLLER = null;
+
+  private static final int OZONE_FS_ITERATE_BATCH_SIZE = 100;
+
+  private OzoneManager ozoneManager;
+
+  private String userName;
+
+  private String ofsPathPrefix;
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(TrashOzoneFileSystem.class);
+
+  public TrashOzoneFileSystem(OzoneManager ozoneManager) throws IOException {
+    this.ozoneManager = ozoneManager;
+    this.userName =
+          UserGroupInformation.getCurrentUser().getShortUserName();
+  }
+
+  @Override
+  public URI getUri() {
+    throw new UnsupportedOperationException(
+        "fs.getUri() not implemented in TrashOzoneFileSystem");
+  }
+
+  @Override
+  public FSDataInputStream open(Path path, int i) {
+    throw new UnsupportedOperationException(
+        "fs.open() not implemented in TrashOzoneFileSystem");
+  }
+
+  @Override
+  public FSDataOutputStream create(Path path,
+      FsPermission fsPermission,
+      boolean b, int i, short i1,
+      long l, Progressable progressable){
+    throw new UnsupportedOperationException(
+        "fs.create() not implemented in TrashOzoneFileSystem");
+  }
+
+  @Override
+  public FSDataOutputStream append(Path path, int i,
+      Progressable progressable) {
+    throw new UnsupportedOperationException(
+        "fs.append() not implemented in TrashOzoneFileSystem");
+  }
+
+  @Override
+  public boolean rename(Path src, Path dst) throws IOException {
+    LOG.trace("Src:" + src + "Dst:" + dst);
+    // check whether the src and dst belong to the same bucket & trashroot.
+    OFSPath srcPath = new OFSPath(src);
+    OFSPath dstPath = new OFSPath(dst);
+    Preconditions.checkArgument(srcPath.getBucketName().
+        equals(dstPath.getBucketName()));
+    Preconditions.checkArgument(srcPath.getTrashRoot().
+        toString().equals(dstPath.getTrashRoot().toString()));
+    RenameIterator iterator = new RenameIterator(src, dst);
+    iterator.iterate();
+    return true;
+  }
+
+  @Override
+  public boolean delete(Path path, boolean b) throws IOException {
+    DeleteIterator iterator = new DeleteIterator(path, true);
+    iterator.iterate();
+    return true;
+  }
+
+  @Override
+  public FileStatus[] listStatus(Path path) throws  IOException {
+    List<FileStatus> fileStatuses = new ArrayList<>();
+    OmKeyArgs keyArgs = constructOmKeyArgs(path);
+    List<OzoneFileStatus> list = ozoneManager.
+        listStatus(keyArgs, false, null, Integer.MAX_VALUE);
+    for (OzoneFileStatus status : list) {
+      FileStatus fileStatus = convertToFileStatus(status);
+      fileStatuses.add(fileStatus);
+    }
+    return fileStatuses.toArray(new FileStatus[0]);
+  }
+
+
+  /**
+   * converts OzoneFileStatus object to FileStatus.
+   */
+  private FileStatus convertToFileStatus(OzoneFileStatus status) {
+    Path temp = new Path(ofsPathPrefix +
+        OZONE_URI_DELIMITER + status.getKeyInfo().getKeyName());
+    return new FileStatus(
+        status.getKeyInfo().getDataSize(),
+        status.isDirectory(),
+        status.getKeyInfo().getFactor().getNumber(),
+        status.getBlockSize(),
+        status.getKeyInfo().getModificationTime(),
+        temp
+    );
+  }
+
+  @Override
+  public void setWorkingDirectory(Path path) {
+    throw new UnsupportedOperationException(
+        "fs.setWorkingDirectory() not implemented in TrashOzoneFileSystem");
+  }
+
+  @Override
+  public Path getWorkingDirectory() {
+    throw new UnsupportedOperationException(
+        "fs.getWorkingDirectory() not implemented in TrashOzoneFileSystem");
+  }
+
+  @Override
+  public boolean mkdirs(Path path,
+      FsPermission fsPermission) {
+    throw new UnsupportedOperationException(
+        "fs.mkdirs() not implemented in TrashOzoneFileSystem");
+  }
+
+
+  @Override
+  public FileStatus getFileStatus(Path path) throws IOException {
+    OmKeyArgs keyArgs = constructOmKeyArgs(path);
+    OzoneFileStatus ofs = ozoneManager.getKeyManager().getFileStatus(keyArgs);
+    FileStatus fileStatus = convertToFileStatus(ofs);
+    return fileStatus;
+  }
+
+  private OmKeyArgs constructOmKeyArgs(Path path) {
+    OFSPath ofsPath = new OFSPath(path);
+    String volume = ofsPath.getVolumeName();
+    String bucket = ofsPath.getBucketName();
+    String key = ofsPath.getKeyName();
+    OmKeyArgs keyArgs = new OmKeyArgs.Builder()
+        .setVolumeName(volume)
+        .setBucketName(bucket)
+        .setKeyName(key)
+        .build();
+    this.ofsPathPrefix = OZONE_URI_DELIMITER +
+        volume + OZONE_URI_DELIMITER + bucket;
+    return keyArgs;
+  }
+
+  @Override
+  public Collection<FileStatus> getTrashRoots(boolean allUsers) {
+    Preconditions.checkArgument(allUsers);
+    Iterator<Map.Entry<CacheKey<String>,
+        CacheValue<OmBucketInfo>>> bucketIterator =
+        ozoneManager.getMetadataManager().getBucketIterator();
+    List<FileStatus> ret = new ArrayList<>();
+    while (bucketIterator.hasNext()){
+      Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>> entry =
+          bucketIterator.next();
+      OmBucketInfo omBucketInfo = entry.getValue().getCacheValue();
+      Path volumePath = new Path(OZONE_URI_DELIMITER,
+          omBucketInfo.getVolumeName());
+      Path bucketPath = new Path(volumePath, omBucketInfo.getBucketName());
+      Path trashRoot = new Path(bucketPath, FileSystem.TRASH_PREFIX);
+      try {
+        if (exists(trashRoot)) {
+          FileStatus[] list = this.listStatus(trashRoot);
+          for (FileStatus candidate : list) {
+            if (exists(candidate.getPath()) && candidate.isDirectory()) {
+              ret.add(candidate);
+            }
+          }
+        }
+      } catch (Exception e) {
+        LOG.error("Couldn't perform fs operation " +
+            "fs.listStatus()/fs.exists()", e);
+      }
+    }
+    return ret;
+  }
+
+  @Override
+  public boolean exists(Path f) throws IOException {
+    try {
+      this.getFileStatus(f);
+      return true;
+    } catch (FileNotFoundException e) {
+      LOG.info("Couldn't execute getFileStatus()", e);
+      return false;
+    }
+  }
+
+  private abstract class OzoneListingIterator {
+    private final Path path;
+    private final FileStatus status;
+    private String pathKey;
+    private TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+          keyIterator;
+
+    OzoneListingIterator(Path path)
+          throws IOException {
+      this.path = path;
+      this.status = getFileStatus(path);
+      this.pathKey = pathToKey(path);
+      if (status.isDirectory()) {
+        this.pathKey = addTrailingSlashIfNeeded(pathKey);
+      }
+      keyIterator = ozoneManager.getMetadataManager().getKeyIterator();
+    }
+
+    /**
+     * The return value of processKeyPath determines whether iteration
+     * through the remaining keys should continue.
+     *
+     * @return true if we should continue iterating keys, false otherwise.
+     * @throws IOException
+     */
+    abstract boolean processKeyPath(List<String> keyPathList)
+        throws IOException;
+
+    /**
+     * Iterates through all the keys prefixed with the input path's key and
+     * processes them in batches through processKeyPath().
+     * If processKeyPath() returns false for any batch, iteration stops and
+     * false is returned, indicating that not all keys could be processed
+     * successfully.
+     *
+     * @return true if all keys are processed successfully, false otherwise.
+     * @throws IOException
+     */
+    boolean iterate() throws IOException {
+      LOG.trace("Iterating path: {}", path);
+      List<String> keyPathList = new ArrayList<>();
+      if (status.isDirectory()) {
+        LOG.trace("Iterating directory: {}", pathKey);
+        OFSPath ofsPath = new OFSPath(pathKey);
+        String ofsPathprefix =
+            ofsPath.getNonKeyPathNoPrefixDelim() + OZONE_URI_DELIMITER;
+        while (keyIterator.hasNext()) {
+          Table.KeyValue<String, OmKeyInfo> kv = keyIterator.next();
+          String keyPath = ofsPathprefix + kv.getValue().getKeyName();
+          LOG.trace("iterating key path: {}", keyPath);
+          if (!kv.getValue().getKeyName().equals("")
+              && kv.getKey().startsWith("/" + pathKey)) {
+            keyPathList.add(keyPath);
+          }
+          if (keyPathList.size() >= OZONE_FS_ITERATE_BATCH_SIZE) {
+            if (!processKeyPath(keyPathList)) {
+              return false;
+            } else {
+              keyPathList.clear();
+            }
+          }
+        }
+        if (keyPathList.size() > 0) {
+          if (!processKeyPath(keyPathList)) {
+            return false;
+          }
+        }
+        return true;
+      } else {
+        LOG.trace("iterating file: {}", path);
+        keyPathList.add(pathKey);
+        return processKeyPath(keyPathList);
+      }
+    }
+
+    FileStatus getStatus() {
+      return status;
+    }
+  }
+
+  private class RenameIterator extends OzoneListingIterator {
+    private final String srcPath;
+    private final String dstPath;
+
+    RenameIterator(Path srcPath, Path dstPath)
+        throws IOException {
+      super(srcPath);
+      this.srcPath = pathToKey(srcPath);
+      this.dstPath = pathToKey(dstPath);
+      LOG.trace("rename from:{} to:{}", this.srcPath, this.dstPath);
+    }
+
+    @Override
+    boolean processKeyPath(List<String> keyPathList) {
+      for (String keyPath : keyPathList) {
+        String newPath = dstPath.concat(keyPath.substring(srcPath.length()));
+        OFSPath src = new OFSPath(keyPath);
+        OFSPath dst = new OFSPath(newPath);
+
+        OzoneManagerProtocolProtos.OMRequest omRequest =
+            getRenameKeyRequest(src, dst);
+        try {
+          ozoneManager.getOmServerProtocol().
+              submitRequest(NULL_RPC_CONTROLLER, omRequest);
+        } catch (ServiceException e) {
+          LOG.error("Couldn't send rename request.");
+        }
+
+      }
+      return true;
+    }
+
+    private OzoneManagerProtocolProtos.OMRequest
+        getRenameKeyRequest(
+        OFSPath src, OFSPath dst) {
+      String volumeName = src.getVolumeName();
+      String bucketName = src.getBucketName();
+      String keyName = src.getKeyName();
+
+      OzoneManagerProtocolProtos.KeyArgs keyArgs =
+          OzoneManagerProtocolProtos.KeyArgs.newBuilder()
+              .setKeyName(keyName)
+              .setVolumeName(volumeName)
+              .setBucketName(bucketName)
+              .build();
+      String toKeyName = dst.getKeyName();
+      OzoneManagerProtocolProtos.RenameKeyRequest renameKeyRequest =
+          OzoneManagerProtocolProtos.RenameKeyRequest.newBuilder()
+              .setKeyArgs(keyArgs)
+              .setToKeyName(toKeyName)
+              .build();
+      OzoneManagerProtocolProtos.OMRequest omRequest =
+          OzoneManagerProtocolProtos.OMRequest.newBuilder()
+              .setClientId(UUID.randomUUID().toString())
+              .setRenameKeyRequest(renameKeyRequest)
+              .setCmdType(OzoneManagerProtocolProtos.Type.RenameKey).build();
+      return omRequest;
+    }
+  }
+
+  private class DeleteIterator extends OzoneListingIterator {
+    private final boolean recursive;
+
+
+    DeleteIterator(Path f, boolean recursive)
+        throws IOException {
+      super(f);
+      this.recursive = recursive;
+      if (getStatus().isDirectory()
+          && !this.recursive
+          && listStatus(f).length != 0) {
+        throw new PathIsNotEmptyDirectoryException(f.toString());
+      }
+    }
+
+    @Override
+    boolean processKeyPath(List<String> keyPathList) {
+      LOG.trace("Deleting keys: {}", keyPathList);
+
+      List<String> keyList = keyPathList.stream()
+          .map(p -> new OFSPath(p).getKeyName())
+          .collect(Collectors.toList());
+
+      // Since keyPathList is obtained by iterating through the key iterator,
+      // all the keys belong to the same volume and bucket.
+      // Here we just read the volume and bucket from the first entry.
+      if (!keyPathList.isEmpty()) {
+        OzoneManagerProtocolProtos.OMRequest omRequest =
+            getDeleteRequest(keyPathList, keyList);
+        try {
+          ozoneManager.getOmServerProtocol().
+              submitRequest(NULL_RPC_CONTROLLER, omRequest);
+        } catch (ServiceException e) {
+          LOG.error("Couldn't send rename request.");
+        }
+        return true;
+      } else {
+        LOG.info("No keys to process");
+        return true;
+      }
+
+    }
+
+    private OzoneManagerProtocolProtos.OMRequest getDeleteRequest(
+        List<String> keyPathList, List<String> keyList) {
+      OFSPath p = new OFSPath(keyPathList.get(0));
+      String volumeName = p.getVolumeName();
+      String bucketName = p.getBucketName();
+      OzoneManagerProtocolProtos.DeleteKeyArgs.Builder deleteKeyArgs =
+          OzoneManagerProtocolProtos.DeleteKeyArgs.newBuilder()
+              .setBucketName(bucketName)
+              .setVolumeName(volumeName);
+      deleteKeyArgs.addAllKeys(keyList);
+      OzoneManagerProtocolProtos.OMRequest omRequest =
+          OzoneManagerProtocolProtos.OMRequest.newBuilder()
+              .setClientId(UUID.randomUUID().toString())
+              .setCmdType(DeleteKeys)
+              .setDeleteKeysRequest(OzoneManagerProtocolProtos
+              .DeleteKeysRequest.newBuilder()
+              .setDeleteKeys(deleteKeyArgs).build()).build();
+      return omRequest;
+    }
+  }
+
+
+
+}
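The OzoneListingIterator above batches key paths (up to OZONE_FS_ITERATE_BATCH_SIZE) before issuing a single rename or delete request per batch. A self-contained sketch of that batching pattern, with a generic processor standing in for the OM request submission:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Predicate;

// Sketch of the batching pattern: collect key paths up to a batch size, hand
// each full batch to a processor (rename or delete in the real code), and
// flush the remainder at the end.
public final class BatchingIteratorDemo {

  static boolean iterate(Iterator<String> keys, int batchSize,
      Predicate<List<String>> processBatch) {
    List<String> batch = new ArrayList<>();
    while (keys.hasNext()) {
      batch.add(keys.next());
      if (batch.size() >= batchSize) {
        if (!processBatch.test(batch)) {
          return false;           // stop early if a batch fails
        }
        batch.clear();
      }
    }
    return batch.isEmpty() || processBatch.test(batch);
  }

  public static void main(String[] args) {
    List<String> keys = new ArrayList<>();
    for (int i = 0; i < 250; i++) {
      keys.add("bucket/.Trash/user/key-" + i);
    }
    boolean ok = iterate(keys.iterator(), 100, batch -> {
      System.out.println("processing batch of " + batch.size());
      return true;
    });
    System.out.println("all processed: " + ok);
  }
}
```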
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/TrashPolicyOzone.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/TrashPolicyOzone.java
index c0278bc..7006ea7 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/TrashPolicyOzone.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/TrashPolicyOzone.java
@@ -24,19 +24,29 @@
 import java.text.SimpleDateFormat;
 import java.util.Collection;
 import java.util.Date;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.TrashPolicyDefault;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
-import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.conf.OMClientConfig;
 import org.apache.hadoop.util.Time;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_KEY;
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_DEFAULT;
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_CHECKPOINT_INTERVAL_KEY;
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_CHECKPOINT_INTERVAL_DEFAULT;
+
 /**
  * TrashPolicy for Ozone Specific Trash Operations.Through this implementation
  *  of TrashPolicy ozone-specific trash optimizations are/will be made such as
@@ -61,25 +71,53 @@
 
   private long emptierInterval;
 
+  private Configuration configuration;
+
+  private OzoneManager om;
+
   public TrashPolicyOzone(){
   }
 
-  private TrashPolicyOzone(FileSystem fs, Configuration conf){
-    super.initialize(conf, fs);
+  @Override
+  public void initialize(Configuration conf, FileSystem fs) {
+    this.fs = fs;
+    this.configuration = conf;
+    this.deletionInterval = (long)(conf.getFloat(
+        FS_TRASH_INTERVAL_KEY, FS_TRASH_INTERVAL_DEFAULT)
+        * MSECS_PER_MINUTE);
+    this.emptierInterval = (long)(conf.getFloat(
+        FS_TRASH_CHECKPOINT_INTERVAL_KEY, FS_TRASH_CHECKPOINT_INTERVAL_DEFAULT)
+        * MSECS_PER_MINUTE);
+    if (deletionInterval < 0) {
+      LOG.warn("Invalid value {} for deletion interval,"
+          + " deletion interaval can not be negative."
+          + "Changing to default value 0", deletionInterval);
+      this.deletionInterval = 0;
+    }
+  }
+
+  TrashPolicyOzone(FileSystem fs, Configuration conf, OzoneManager om){
+    initialize(conf, fs);
+    this.om = om;
   }
 
   @Override
   public Runnable getEmptier() throws IOException {
-    return new TrashPolicyOzone.Emptier(getConf(), emptierInterval);
+    return new TrashPolicyOzone.Emptier((OzoneConfiguration)configuration,
+        emptierInterval);
   }
 
+
   protected class Emptier implements Runnable {
 
     private Configuration conf;
     // same as checkpoint interval
     private long emptierInterval;
 
-    Emptier(Configuration conf, long emptierInterval) throws IOException {
+
+    private ThreadPoolExecutor executor;
+
+    Emptier(OzoneConfiguration conf, long emptierInterval) throws IOException {
       this.conf = conf;
       this.emptierInterval = emptierInterval;
       if (emptierInterval > deletionInterval || emptierInterval <= 0) {
@@ -90,10 +128,16 @@
             " minutes that is used for deletion instead");
         this.emptierInterval = deletionInterval;
       }
+      int trashEmptierCorePoolSize = conf.getObject(OMClientConfig.class)
+          .getTrashEmptierPoolSize();
       LOG.info("Ozone Manager trash configuration: Deletion interval = "
           + (deletionInterval / MSECS_PER_MINUTE)
           + " minutes, Emptier interval = "
           + (this.emptierInterval / MSECS_PER_MINUTE) + " minutes.");
+      executor = new ThreadPoolExecutor(trashEmptierCorePoolSize,
+          trashEmptierCorePoolSize, 1,
+          TimeUnit.SECONDS, new ArrayBlockingQueue<>(1024),
+          new ThreadPoolExecutor.CallerRunsPolicy());
     }
 
     @Override
@@ -105,8 +149,13 @@
       while (true) {
         now = Time.now();
         end = ceiling(now, emptierInterval);
-        try {                                     // sleep for interval
+        try {
+          // sleep for interval
           Thread.sleep(end - now);
+          // If this OM is not the ready leader, skip this cycle.
+          if (!om.isLeaderReady()) {
+            continue;
+          }
         } catch (InterruptedException e) {
           break;                                  // exit on interrupt
         }
@@ -115,20 +164,24 @@
           now = Time.now();
           if (now >= end) {
             Collection<FileStatus> trashRoots;
-            trashRoots = fs.getTrashRoots(true);      // list all trash dirs
-
-            for (FileStatus trashRoot : trashRoots) {   // dump each trash
+            trashRoots = fs.getTrashRoots(true); // list all trash dirs
+            LOG.debug("Trash root Size: " + trashRoots.size());
+            for (FileStatus trashRoot : trashRoots) {  // dump each trash
+              LOG.info("Trashroot:" + trashRoot.getPath().toString());
               if (!trashRoot.isDirectory()) {
                 continue;
               }
-              try {
-                TrashPolicyOzone trash = new TrashPolicyOzone(fs, conf);
-                trash.deleteCheckpoint(trashRoot.getPath(), false);
-                trash.createCheckpoint(trashRoot.getPath(), new Date(now));
-              } catch (IOException e) {
-                LOG.warn("Trash caught: "+e+". Skipping " +
-                    trashRoot.getPath() + ".");
-              }
+              TrashPolicyOzone trash = new TrashPolicyOzone(fs, conf, om);
+              Runnable task = () -> {
+                try {
+                  trash.deleteCheckpoint(trashRoot.getPath(), false);
+                  trash.createCheckpoint(trashRoot.getPath(),
+                      new Date(Time.now()));
+                } catch (Exception e) {
+                  LOG.info("Unable to checkpoint");
+                }
+              };
+              executor.submit(task);
             }
           }
         } catch (Exception e) {
@@ -139,6 +192,13 @@
         fs.close();
       } catch(IOException e) {
         LOG.warn("Trash cannot close FileSystem: ", e);
+      } finally {
+        executor.shutdown();
+        try {
+          executor.awaitTermination(60, TimeUnit.SECONDS);
+        } catch (InterruptedException e) {
+          LOG.error("Error attempting to shutdown");
+        }
       }
     }
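The reworked emptier above sleeps one checkpoint interval at a time, skips the cycle on non-leader OMs via isLeaderReady(), and fans per-trash-root checkpointing out to a bounded thread pool. A minimal sketch of that loop structure under those assumptions (the leader check and trash roots are stubbed):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Sketch of the emptier loop: sleep one interval, skip the cycle unless this
// node believes it is the ready leader, then submit per-root checkpoint work.
public final class LeaderGatedEmptierDemo {

  public static void main(String[] args) throws InterruptedException {
    BooleanSupplier isLeaderReady = () -> true; // stands in for om.isLeaderReady()
    ThreadPoolExecutor executor = new ThreadPoolExecutor(2, 2, 1,
        TimeUnit.SECONDS, new ArrayBlockingQueue<>(1024),
        new ThreadPoolExecutor.CallerRunsPolicy());

    for (int cycle = 0; cycle < 3; cycle++) {
      Thread.sleep(100);                 // "emptier interval"
      if (!isLeaderReady.getAsBoolean()) {
        continue;                        // followers just keep sleeping
      }
      for (String trashRoot
          : new String[]{"/vol1/buck1/.Trash", "/vol1/buck2/.Trash"}) {
        executor.submit(() ->
            System.out.println("checkpointing " + trashRoot));
      }
    }
    executor.shutdown();
    executor.awaitTermination(10, TimeUnit.SECONDS);
  }
}
```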
 
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/codec/OMDBDefinition.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/codec/OMDBDefinition.java
index 1302922..b1c5096 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/codec/OMDBDefinition.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/codec/OMDBDefinition.java
@@ -22,6 +22,7 @@
 import org.apache.hadoop.hdds.utils.db.DBDefinition;
 import org.apache.hadoop.hdds.utils.db.LongCodec;
 import org.apache.hadoop.hdds.utils.db.StringCodec;
+import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.om.OMConfigKeys;
 import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
@@ -44,7 +45,7 @@
   public static final DBColumnFamilyDefinition<String, RepeatedOmKeyInfo>
             DELETED_TABLE =
             new DBColumnFamilyDefinition<>(
-                    "deletedTable",
+                    OmMetadataManagerImpl.DELETED_TABLE,
                     String.class,
                     new StringCodec(),
                     RepeatedOmKeyInfo.class,
@@ -54,7 +55,7 @@
             OzoneManagerStorageProtos.PersistedUserVolumeInfo>
             USER_TABLE =
             new DBColumnFamilyDefinition<>(
-                    "userTable",
+                    OmMetadataManagerImpl.USER_TABLE,
                     String.class,
                     new StringCodec(),
                     OzoneManagerStorageProtos.PersistedUserVolumeInfo.class,
@@ -63,7 +64,7 @@
   public static final DBColumnFamilyDefinition<String, OmVolumeArgs>
             VOLUME_TABLE =
             new DBColumnFamilyDefinition<>(
-                    "volumeTable",
+                    OmMetadataManagerImpl.VOLUME_TABLE,
                     String.class,
                     new StringCodec(),
                     OmVolumeArgs.class,
@@ -72,7 +73,7 @@
   public static final DBColumnFamilyDefinition<String, OmKeyInfo>
             OPEN_KEY_TABLE =
             new DBColumnFamilyDefinition<>(
-                    "openKeyTable",
+                    OmMetadataManagerImpl.OPEN_KEY_TABLE,
                     String.class,
                     new StringCodec(),
                     OmKeyInfo.class,
@@ -81,7 +82,7 @@
   public static final DBColumnFamilyDefinition<String, OmKeyInfo>
             KEY_TABLE =
             new DBColumnFamilyDefinition<>(
-                    "keyTable",
+                    OmMetadataManagerImpl.KEY_TABLE,
                     String.class,
                     new StringCodec(),
                     OmKeyInfo.class,
@@ -90,7 +91,7 @@
   public static final DBColumnFamilyDefinition<String, OmBucketInfo>
             BUCKET_TABLE =
             new DBColumnFamilyDefinition<>(
-                    "bucketTable",
+                    OmMetadataManagerImpl.BUCKET_TABLE,
                     String.class,
                     new StringCodec(),
                     OmBucketInfo.class,
@@ -99,7 +100,7 @@
   public static final DBColumnFamilyDefinition<String, OmMultipartKeyInfo>
             MULTIPART_INFO_TABLE =
             new DBColumnFamilyDefinition<>(
-                    "multipartInfoTable",
+                    OmMetadataManagerImpl.MULTIPARTINFO_TABLE,
                     String.class,
                     new StringCodec(),
                     OmMultipartKeyInfo.class,
@@ -108,7 +109,7 @@
   public static final DBColumnFamilyDefinition<String, OmPrefixInfo>
             PREFIX_TABLE =
             new DBColumnFamilyDefinition<>(
-                    "prefixTable",
+                    OmMetadataManagerImpl.PREFIX_TABLE,
                     String.class,
                     new StringCodec(),
                     OmPrefixInfo.class,
@@ -117,7 +118,7 @@
   public static final DBColumnFamilyDefinition<OzoneTokenIdentifier, Long>
             DTOKEN_TABLE =
             new DBColumnFamilyDefinition<>(
-                    "dTokenTable",
+                    OmMetadataManagerImpl.DELEGATION_TOKEN_TABLE,
                     OzoneTokenIdentifier.class,
                     new TokenIdentifierCodec(),
                     Long.class,
@@ -126,25 +127,24 @@
   public static final DBColumnFamilyDefinition<String, S3SecretValue>
             S3_SECRET_TABLE =
             new DBColumnFamilyDefinition<>(
-                    "s3SecretTable",
+                    OmMetadataManagerImpl.S3_SECRET_TABLE,
                     String.class,
                     new StringCodec(),
                     S3SecretValue.class,
                     new S3SecretValueCodec());
 
   public static final DBColumnFamilyDefinition<String, OMTransactionInfo>
-      TRANSACTION_INFO_TABLE =
-      new DBColumnFamilyDefinition<>(
-          OmMetadataManagerImpl.TRANSACTION_INFO_TABLE,
-          String.class,
-          new StringCodec(),
-          OMTransactionInfo.class,
-          new OMTransactionInfoCodec());
-
+            TRANSACTION_INFO_TABLE =
+            new DBColumnFamilyDefinition<>(
+                    OmMetadataManagerImpl.TRANSACTION_INFO_TABLE,
+                    String.class,
+                    new StringCodec(),
+                    OMTransactionInfo.class,
+                    new OMTransactionInfoCodec());
 
   @Override
   public String getName() {
-    return "om.db";
+    return OzoneConsts.OM_DB_NAME;
   }
 
   @Override
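The OMDBDefinition changes above replace repeated string literals with the table-name constants already defined in OmMetadataManagerImpl, so the DB definition cannot drift from the metadata manager. A tiny illustrative sketch of that single-source-of-truth pattern (stand-in classes, not the real DBColumnFamilyDefinition):

```java
// Sketch of why the literals were replaced with shared constants: both the
// metadata manager and the DB definition must agree on the column family
// name, so a single constant removes the chance of silent drift.
public final class TableNameDemo {

  static final class MetadataTables {            // stands in for OmMetadataManagerImpl
    static final String KEY_TABLE = "keyTable";
  }

  static final class ColumnFamilyDefinition {    // stands in for DBColumnFamilyDefinition
    final String name;
    ColumnFamilyDefinition(String name) {
      this.name = name;
    }
  }

  // Referencing the constant keeps the definition in sync with the manager.
  static final ColumnFamilyDefinition KEY_TABLE_DEF =
      new ColumnFamilyDefinition(MetadataTables.KEY_TABLE);

  public static void main(String[] args) {
    System.out.println(KEY_TABLE_DEF.name.equals(MetadataTables.KEY_TABLE)); // true
  }
}
```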
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
index 75bfd7e..bd5b870 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ha/OMHANodeDetails.java
@@ -182,15 +182,12 @@
         }
       }
       if (found == 1) {
-        LOG.debug("Found one matching OM address with service ID: {} and node" +
-            " ID: {}", localOMServiceId, localOMNodeId);
 
         LOG.info("Found matching OM address with OMServiceId: {}, " +
                 "OMNodeId: {}, RPC Address: {} and Ratis port: {}",
             localOMServiceId, localOMNodeId,
             NetUtils.getHostPortString(localRpcAddress), localRatisPort);
 
-
         setOMNodeSpecificConfigs(conf, localOMServiceId, localOMNodeId);
         return new OMHANodeDetails(getHAOMNodeDetails(conf, localOMServiceId,
             localOMNodeId, localRpcAddress, localRatisPort), peerNodesList);
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
index 638b1a3..5c880e9 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
@@ -55,10 +55,6 @@
 import org.apache.ratis.util.ExitUtils;
 
 import static org.apache.hadoop.ozone.OzoneConsts.TRANSACTION_INFO_KEY;
-import static org.apache.hadoop.ozone.OzoneConsts.BUCKET;
-import static org.apache.hadoop.ozone.OzoneConsts.VOLUME;
-import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Type.DeleteBucket;
-import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Type.DeleteVolume;
 
 /**
  * This class implements DoubleBuffer implementation of OMClientResponse's. In
@@ -261,7 +257,7 @@
                       return null;
                     });
 
-                setCleanupEpoch(entry, cleanupEpochs);
+                addCleanupEntry(entry, cleanupEpochs);
 
               } catch (IOException ex) {
                 // During Adding to RocksDB batch entry got an exception.
@@ -370,34 +366,6 @@
     }
   }
 
-  /**
-   * Set cleanup epoch for the DoubleBufferEntry.
-   * @param entry
-   * @param cleanupEpochs
-   */
-  private void setCleanupEpoch(DoubleBufferEntry entry, Map<String,
-      List<Long>> cleanupEpochs) {
-    // Add epochs depending on operated tables. In this way
-    // cleanup will be called only when required.
-
-    // As bucket and volume table is full cache add cleanup
-    // epochs only when request is delete to cleanup deleted
-    // entries.
-
-    String opName =
-        entry.getResponse().getOMResponse().getCmdType().name();
-
-    if (opName.toLowerCase().contains(VOLUME) ||
-        opName.toLowerCase().contains(BUCKET)) {
-      if (DeleteBucket.name().equals(opName)
-          || DeleteVolume.name().equals(opName)) {
-        addCleanupEntry(entry, cleanupEpochs);
-      }
-    } else {
-      addCleanupEntry(entry, cleanupEpochs);
-    }
-  }
-
 
   private void addCleanupEntry(DoubleBufferEntry entry, Map<String,
       List<Long>> cleanupEpochs) {
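With the volume/bucket special-casing removed above, every flushed transaction now records its cleanup epoch through addCleanupEntry. A small sketch of that bookkeeping shape, assuming a per-table map of flushed epochs (names are illustrative, not the DoubleBuffer internals):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of cleanup-epoch bookkeeping: each flushed transaction records its
// epoch against the tables it touched; table caches are then cleaned up to
// the highest recorded epoch after the batch commit.
public final class CleanupEpochDemo {

  static void addCleanupEntry(Map<String, List<Long>> cleanupEpochs,
      long epoch, String... tables) {
    for (String table : tables) {
      cleanupEpochs.computeIfAbsent(table, t -> new ArrayList<>()).add(epoch);
    }
  }

  public static void main(String[] args) {
    Map<String, List<Long>> cleanupEpochs = new HashMap<>();
    addCleanupEntry(cleanupEpochs, 101L, "keyTable");
    addCleanupEntry(cleanupEpochs, 102L, "bucketTable");
    addCleanupEntry(cleanupEpochs, 103L, "keyTable", "deletedTable");
    cleanupEpochs.forEach((table, epochs) ->
        System.out.println(table + " -> cleanup up to "
            + epochs.get(epochs.size() - 1)));
  }
}
```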
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
index f552f48..36ec1e1 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
@@ -26,13 +26,9 @@
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
-import java.util.Optional;
+import java.util.Map;
 import java.util.UUID;
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicLong;
-import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
@@ -60,11 +56,7 @@
 import org.apache.ratis.conf.RaftProperties;
 import org.apache.ratis.grpc.GrpcConfigKeys;
 import org.apache.ratis.netty.NettyConfigKeys;
-import org.apache.ratis.proto.RaftProtos.RaftPeerRole;
-import org.apache.ratis.proto.RaftProtos.RoleInfoProto;
 import org.apache.ratis.protocol.ClientId;
-import org.apache.ratis.protocol.GroupInfoReply;
-import org.apache.ratis.protocol.GroupInfoRequest;
 import org.apache.ratis.protocol.exceptions.LeaderNotReadyException;
 import org.apache.ratis.protocol.exceptions.NotLeaderException;
 import org.apache.ratis.protocol.exceptions.StateMachineException;
@@ -80,7 +72,6 @@
 import org.apache.ratis.server.RaftServer;
 import org.apache.ratis.server.RaftServerConfigKeys;
 import org.apache.ratis.server.protocol.TermIndex;
-import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
 import org.apache.ratis.util.LifeCycle;
 import org.apache.ratis.util.SizeInBytes;
 import org.apache.ratis.util.StringUtils;
@@ -88,8 +79,12 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
 import static org.apache.hadoop.ipc.RpcConstants.DUMMY_CLIENT_ID;
 import static org.apache.hadoop.ipc.RpcConstants.INVALID_CALL_ID;
+import static org.apache.hadoop.ozone.OzoneConsts.OM_RATIS_SNAPSHOT_DIR;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_HA_PREFIX;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_SNAPSHOT_DIR;
 
 /**
  * Creates a Ratis server endpoint for OM.
@@ -107,20 +102,6 @@
 
   private final OzoneManager ozoneManager;
   private final OzoneManagerStateMachine omStateMachine;
-  private final ClientId clientId = ClientId.randomId();
-
-  private final ScheduledExecutorService scheduledRoleChecker;
-  private long roleCheckInitialDelayMs = 1000; // 1 second default
-  private long roleCheckIntervalMs;
-  private ReentrantReadWriteLock roleCheckLock = new ReentrantReadWriteLock();
-  private Optional<RaftPeerRole> cachedPeerRole = Optional.empty();
-  private Optional<RaftPeerId> cachedLeaderPeerId = Optional.empty();
-
-  private static final AtomicLong CALL_ID_COUNTER = new AtomicLong();
-
-  private static long nextCallId() {
-    return CALL_ID_COUNTER.getAndIncrement() & Long.MAX_VALUE;
-  }
 
   /**
    * Submit request to Ratis server.
@@ -187,11 +168,17 @@
   private RaftClientRequest createWriteRaftClientRequest(OMRequest omRequest) {
     Preconditions.checkArgument(Server.getClientId() != DUMMY_CLIENT_ID);
     Preconditions.checkArgument(Server.getCallId() != INVALID_CALL_ID);
-    return new RaftClientRequest(
-        ClientId.valueOf(UUID.nameUUIDFromBytes(Server.getClientId())),
-        server.getId(), raftGroupId, Server.getCallId(),
-        Message.valueOf(OMRatisHelper.convertRequestToByteString(omRequest)),
-        RaftClientRequest.writeRequestType(), null);
+    return RaftClientRequest.newBuilder()
+        .setClientId(
+            ClientId.valueOf(UUID.nameUUIDFromBytes(Server.getClientId())))
+        .setServerId(server.getId())
+        .setGroupId(raftGroupId)
+        .setCallId(Server.getCallId())
+        .setMessage(
+            Message.valueOf(
+                OMRatisHelper.convertRequestToByteString(omRequest)))
+        .setType(RaftClientRequest.writeRequestType())
+        .build();
   }
 
   /**
@@ -321,20 +308,6 @@
         .setProperties(serverProperties)
         .setStateMachine(omStateMachine)
         .build();
-
-    // Run a scheduler to check and update the server role on the leader
-    // periodically
-    this.scheduledRoleChecker = Executors.newSingleThreadScheduledExecutor();
-    this.scheduledRoleChecker.scheduleWithFixedDelay(new Runnable() {
-      @Override
-      public void run() {
-        // Run this check only on the leader OM
-        if (cachedPeerRole.isPresent() &&
-            cachedPeerRole.get() == RaftPeerRole.LEADER) {
-          updateServerRole();
-        }
-      }
-    }, roleCheckInitialDelayMs, roleCheckIntervalMs, TimeUnit.MILLISECONDS);
   }
 
   /**
@@ -576,19 +549,6 @@
     RaftServerConfigKeys.Rpc.setSlownessTimeout(properties,
         nodeFailureTimeout);
 
-    TimeUnit roleCheckIntervalUnit =
-        OMConfigKeys.OZONE_OM_RATIS_SERVER_ROLE_CHECK_INTERVAL_DEFAULT
-            .getUnit();
-    long roleCheckIntervalDuration = conf.getTimeDuration(
-        OMConfigKeys.OZONE_OM_RATIS_SERVER_ROLE_CHECK_INTERVAL_KEY,
-        OMConfigKeys.OZONE_OM_RATIS_SERVER_ROLE_CHECK_INTERVAL_DEFAULT
-            .getDuration(), nodeFailureTimeoutUnit);
-    this.roleCheckIntervalMs = TimeDuration.valueOf(
-        roleCheckIntervalDuration, roleCheckIntervalUnit)
-        .toLong(TimeUnit.MILLISECONDS);
-    this.roleCheckInitialDelayMs = leaderElectionMinTimeout
-        .toLong(TimeUnit.MILLISECONDS);
-
     // Set auto trigger snapshot. We don't need to configure auto trigger
     // threshold in OM, as last applied index is flushed during double buffer
     // flush automatically. (But added this property internally, so that this
@@ -607,111 +567,57 @@
 
     RaftServerConfigKeys.Snapshot.setAutoTriggerThreshold(properties,
         snapshotAutoTriggerThreshold);
+
+    createRaftServerProperties(conf, properties);
     return properties;
   }
 
-  /**
-   * Check the cached leader status.
-   * @return true if cached role is Leader, false otherwise.
-   */
-  private boolean checkCachedPeerRoleIsLeader() {
-    this.roleCheckLock.readLock().lock();
-    try {
-      if (cachedPeerRole.isPresent() &&
-          cachedPeerRole.get() == RaftPeerRole.LEADER) {
-        return true;
-      }
-      return false;
-    } finally {
-      this.roleCheckLock.readLock().unlock();
-    }
+  private void createRaftServerProperties(ConfigurationSource ozoneConf,
+      RaftProperties raftProperties) {
+    Map<String, String> ratisServerConf =
+        getOMHAConfigs(ozoneConf);
+    ratisServerConf.forEach((key, val) -> {
+      raftProperties.set(key, val);
+    });
+  }
+
+  private static Map<String, String> getOMHAConfigs(
+      ConfigurationSource configuration) {
+    return configuration.getPropsWithPrefix(OZONE_OM_HA_PREFIX + ".");
   }
 
   /**
-   * Check if the current OM node is the leader node.
-   * @return true if Leader, false otherwise.
+   * Defines RaftServer Status.
    */
-  public boolean isLeader() {
-    if (checkCachedPeerRoleIsLeader()) {
-      return true;
-    }
-
-    // Get the server role from ratis server and update the cached values.
-    updateServerRole();
-
-    // After updating the server role, check and return if leader or not.
-    return checkCachedPeerRoleIsLeader();
+  public enum RaftServerStatus {
+    NOT_LEADER,
+    LEADER_AND_NOT_READY,
+    LEADER_AND_READY;
   }
 
   /**
-   * Get the suggested leader peer id.
-   * @return RaftPeerId of the suggested leader node.
+   * Check Leader status and return the state of the RaftServer.
+   *
+   * @return RaftServerStatus.
    */
-  public Optional<RaftPeerId> getCachedLeaderPeerId() {
-    this.roleCheckLock.readLock().lock();
+  public RaftServerStatus checkLeaderStatus() {
     try {
-      return cachedLeaderPeerId;
-    } finally {
-      this.roleCheckLock.readLock().unlock();
-    }
-  }
-
-  /**
-   * Get the gorup info (peer role and leader peer id) from Ratis server and
-   * update the OM server role.
-   */
-  public void updateServerRole() {
-    try {
-      GroupInfoReply groupInfo = getGroupInfo();
-      RoleInfoProto roleInfoProto = groupInfo.getRoleInfoProto();
-      RaftPeerRole thisNodeRole = roleInfoProto.getRole();
-
-      if (thisNodeRole.equals(RaftPeerRole.LEADER)) {
-        setServerRole(thisNodeRole, raftPeerId);
-
-      } else if (thisNodeRole.equals(RaftPeerRole.FOLLOWER)) {
-        ByteString leaderNodeId = roleInfoProto.getFollowerInfo()
-            .getLeaderInfo().getId().getId();
-        // There may be a chance, here we get leaderNodeId as null. For
-        // example, in 3 node OM Ratis, if 2 OM nodes are down, there will
-        // be no leader.
-        RaftPeerId leaderPeerId = null;
-        if (leaderNodeId != null && !leaderNodeId.isEmpty()) {
-          leaderPeerId = RaftPeerId.valueOf(leaderNodeId);
+      RaftServer.Division division = server.getDivision(raftGroupId);
+      if (division != null) {
+        if (!division.getInfo().isLeader()) {
+          return RaftServerStatus.NOT_LEADER;
+        } else if (division.getInfo().isLeaderReady()) {
+          return RaftServerStatus.LEADER_AND_READY;
+        } else {
+          return RaftServerStatus.LEADER_AND_NOT_READY;
         }
-
-        setServerRole(thisNodeRole, leaderPeerId);
-
-      } else {
-        setServerRole(thisNodeRole, null);
-
       }
-    } catch (IOException e) {
-      LOG.error("Failed to retrieve RaftPeerRole. Setting cached role to " +
-          "{} and resetting leader info.", RaftPeerRole.UNRECOGNIZED, e);
-      setServerRole(null, null);
+    } catch (IOException ioe) {
+      // Treat this OM as not the leader if the division is unavailable.
+      LOG.error("Failed to get RaftServer division; cannot determine " +
+          "whether this OM is the leader.", ioe);
     }
-  }
-
-  /**
-   * Set the current server role and the leader peer id.
-   */
-  private void setServerRole(RaftPeerRole currentRole,
-      RaftPeerId leaderPeerId) {
-    this.roleCheckLock.writeLock().lock();
-    try {
-      this.cachedPeerRole = Optional.ofNullable(currentRole);
-      this.cachedLeaderPeerId = Optional.ofNullable(leaderPeerId);
-    } finally {
-      this.roleCheckLock.writeLock().unlock();
-    }
-  }
-
-  private GroupInfoReply getGroupInfo() throws IOException {
-    GroupInfoRequest groupInfoRequest = new GroupInfoRequest(clientId,
-        raftPeerId, raftGroupId, nextCallId());
-    GroupInfoReply groupInfo = server.getGroupInfo(groupInfoRequest);
-    return groupInfo;
+    return RaftServerStatus.NOT_LEADER;
   }
 
   public int getServerPort() {
@@ -745,11 +651,15 @@
   }
 
   public static String getOMRatisSnapshotDirectory(ConfigurationSource conf) {
-    String snapshotDir = conf.get(OMConfigKeys.OZONE_OM_RATIS_SNAPSHOT_DIR);
+    String snapshotDir = conf.get(OZONE_OM_RATIS_SNAPSHOT_DIR);
 
+    // If the Ratis snapshot dir is not set, fall back to ozone.metadata.dirs.
     if (Strings.isNullOrEmpty(snapshotDir)) {
-      snapshotDir = Paths.get(getOMRatisDirectory(conf),
-          "snapshot").toString();
+      LOG.warn("{} is not configured. Falling back to {} config",
+          OZONE_OM_RATIS_SNAPSHOT_DIR, OZONE_METADATA_DIRS);
+      File metaDirPath = ServerUtils.getOzoneMetaDirPath(conf);
+      snapshotDir = Paths.get(metaDirPath.getPath(),
+          OM_RATIS_SNAPSHOT_DIR).toString();
     }
     return snapshotDir;
   }
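
The checkLeaderStatus() method introduced above replaces the old cached-role scheduler with a direct query of the Ratis division. A minimal, hypothetical caller sketch (not part of this patch) of how code that previously used isLeader() might branch on the new RaftServerStatus; the wrapper class and method name are illustrative only:

    import org.apache.hadoop.ozone.om.ratis.OzoneManagerRatisServer;

    final class LeaderStatusSketch {
      // Hypothetical caller; the argument is assumed to be an
      // already-started OzoneManagerRatisServer.
      static boolean canServeWrite(OzoneManagerRatisServer omRatisServer) {
        switch (omRatisServer.checkLeaderStatus()) {
        case LEADER_AND_READY:
          return true;           // safe to submit writes to the Ratis ring
        case LEADER_AND_NOT_READY:
          return false;          // leader elected, state machine catching up
        case NOT_LEADER:
        default:
          return false;          // redirect the client to the current leader
        }
      }
    }
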
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServerConfig.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServerConfig.java
new file mode 100644
index 0000000..c681289
--- /dev/null
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServerConfig.java
@@ -0,0 +1,54 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.ratis;
+
+import org.apache.hadoop.hdds.conf.Config;
+import org.apache.hadoop.hdds.conf.ConfigGroup;
+import org.apache.hadoop.hdds.conf.ConfigType;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.ratis.server.RaftServerConfigKeys;
+
+import java.time.Duration;
+
+import static org.apache.hadoop.hdds.conf.ConfigTag.OM;
+import static org.apache.hadoop.hdds.conf.ConfigTag.OZONE;
+import static org.apache.hadoop.hdds.conf.ConfigTag.RATIS;
+
+/**
+ * Class which defines OzoneManager Ratis Server config.
+ */
+@ConfigGroup(prefix = OMConfigKeys.OZONE_OM_HA_PREFIX + "."
+    + RaftServerConfigKeys.PREFIX)
+public class OzoneManagerRatisServerConfig {
+
+  @Config(key = "retrycache.expirytime",
+      defaultValue = "300s",
+      type = ConfigType.TIME,
+      tags = {OZONE, OM, RATIS},
+      description = "The timeout duration of the retry cache."
+  )
+  private long retryCacheTimeout = Duration.ofSeconds(300).toMillis();
+
+  public long getRetryCacheTimeout() {
+    return retryCacheTimeout;
+  }
+
+  public void setRetryCacheTimeout(Duration duration) {
+    this.retryCacheTimeout = duration.toMillis();
+  }
+}
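
The config group above feeds createRaftServerProperties()/getOMHAConfigs() in OzoneManagerRatisServer, which copy every property under the OM HA prefix into the Ratis RaftProperties unchanged. A minimal sketch of that flow, assuming OZONE_OM_HA_PREFIX resolves to "ozone.om.ha" and RaftServerConfigKeys.PREFIX to "raft.server" (the constants in OMConfigKeys and Ratis are authoritative):

    import org.apache.hadoop.hdds.conf.OzoneConfiguration;
    import org.apache.ratis.conf.RaftProperties;

    final class OmHaRatisConfigSketch {
      static RaftProperties toRaftProperties() {
        // An operator (or test) sets the key built from the prefixes above;
        // the literal key string is an assumption for illustration.
        OzoneConfiguration conf = new OzoneConfiguration();
        conf.set("ozone.om.ha.raft.server.retrycache.expirytime", "600s");

        // Mirrors createRaftServerProperties(): getPropsWithPrefix() strips
        // the OM HA prefix and hands the remainder to Ratis unchanged.
        RaftProperties raftProperties = new RaftProperties();
        conf.getPropsWithPrefix("ozone.om.ha.").forEach(raftProperties::set);
        // raftProperties now carries raft.server.retrycache.expirytime=600s.
        return raftProperties;
      }
    }
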
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
index baf5000..78e0567 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
@@ -419,7 +419,6 @@
   @Override
   public void notifyNotLeader(Collection<TransactionContext> pendingEntries)
       throws IOException {
-    omRatisServer.updateServerRole();
   }
 
   @Override
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
index ebe9e6c..09f91a5 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
@@ -206,14 +206,22 @@
       // Add default acls from volume.
       addDefaultAcls(omBucketInfo, omVolumeArgs);
 
+      // check namespace quota
+      checkQuotaInNamespace(omVolumeArgs, 1L);
+
+      // update used namespace for volume
+      omVolumeArgs.incrUsedNamespace(1L);
+
       // Update table cache.
+      metadataManager.getVolumeTable().addCacheEntry(new CacheKey<>(volumeKey),
+          new CacheValue<>(Optional.of(omVolumeArgs), transactionLogIndex));
       metadataManager.getBucketTable().addCacheEntry(new CacheKey<>(bucketKey),
           new CacheValue<>(Optional.of(omBucketInfo), transactionLogIndex));
 
       omResponse.setCreateBucketResponse(
           CreateBucketResponse.newBuilder().build());
       omClientResponse = new OMBucketCreateResponse(omResponse.build(),
-          omBucketInfo);
+          omBucketInfo, omVolumeArgs.copyObject());
     } catch (IOException ex) {
       exception = ex;
       omClientResponse = new OMBucketCreateResponse(
@@ -302,6 +310,25 @@
     return bekb.build();
   }
 
+  /**
+   * Check namespace quota.
+   */
+  private void checkQuotaInNamespace(OmVolumeArgs omVolumeArgs,
+      long allocatedNamespace) throws IOException {
+    if (omVolumeArgs.getQuotaInNamespace() > 0) {
+      long usedNamespace = omVolumeArgs.getUsedNamespace();
+      long quotaInNamespace = omVolumeArgs.getQuotaInNamespace();
+      long toUseNamespaceInTotal = usedNamespace + allocatedNamespace;
+      if (quotaInNamespace < toUseNamespaceInTotal) {
+        throw new OMException("The namespace quota of Volume:"
+            + omVolumeArgs.getVolume() + " exceeded: quotaInNamespace: "
+            + quotaInNamespace + " but namespace consumed: "
+            + toUseNamespaceInTotal + ".",
+            OMException.ResultCodes.QUOTA_EXCEEDED);
+      }
+    }
+  }
+
   public boolean checkQuotaBytesValid(OMMetadataManager metadataManager,
       OmVolumeArgs omVolumeArgs, OmBucketInfo omBucketInfo, String volumeKey)
       throws IOException {
@@ -309,10 +336,10 @@
     long volumeQuotaInBytes = omVolumeArgs.getQuotaInBytes();
 
     long totalBucketQuota = 0;
-    if (quotaInBytes == OzoneConsts.QUOTA_RESET || quotaInBytes == 0) {
-      return false;
-    } else if (quotaInBytes > OzoneConsts.QUOTA_RESET) {
+    if (quotaInBytes > 0) {
       totalBucketQuota = quotaInBytes;
+    } else {
+      return false;
     }
 
     List<OmBucketInfo>  bucketList = metadataManager.listBuckets(
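
Bucket creation now charges one namespace unit against the volume (incrUsedNamespace(1L)), and OMBucketDeleteRequest below releases it symmetrically. A worked example of the checkQuotaInNamespace() arithmetic, with illustrative numbers only:

    final class NamespaceQuotaSketch {
      // Mirrors the arithmetic in checkQuotaInNamespace(); values are
      // illustrative.
      static boolean wouldExceed() {
        long quotaInNamespace = 5;    // volume-level limit on bucket count
        long usedNamespace = 5;       // buckets already charged to the volume
        long allocatedNamespace = 1;  // this bucket-create request adds one
        // A quota <= 0 disables the check entirely.
        return quotaInNamespace > 0
            && quotaInNamespace < usedNamespace + allocatedNamespace; // true
      }
    }
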
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketDeleteRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketDeleteRequest.java
index 33ea990..2990229 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketDeleteRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketDeleteRequest.java
@@ -22,6 +22,7 @@
 import java.util.Map;
 
 import com.google.common.base.Optional;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
 import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
 import org.slf4j.Logger;
@@ -135,9 +136,23 @@
       omResponse.setDeleteBucketResponse(
           DeleteBucketResponse.newBuilder().build());
 
+      // update used namespace for volume
+      String volumeKey = omMetadataManager.getVolumeKey(volumeName);
+      OmVolumeArgs omVolumeArgs =
+          omMetadataManager.getVolumeTable().getReadCopy(volumeKey);
+      if (omVolumeArgs == null) {
+        throw new OMException("Volume " + volumeName + " is not found",
+            OMException.ResultCodes.VOLUME_NOT_FOUND);
+      }
+      omVolumeArgs.incrUsedNamespace(-1L);
+      // Update table cache.
+      omMetadataManager.getVolumeTable().addCacheEntry(
+          new CacheKey<>(volumeKey),
+          new CacheValue<>(Optional.of(omVolumeArgs), transactionLogIndex));
+
       // Add to double buffer.
       omClientResponse = new OMBucketDeleteResponse(omResponse.build(),
-          volumeName, bucketName);
+          volumeName, bucketName, omVolumeArgs.copyObject());
     } catch (IOException ex) {
       success = false;
       exception = ex;
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketSetPropertyRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketSetPropertyRequest.java
index 4b11472..e995966 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketSetPropertyRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketSetPropertyRequest.java
@@ -71,6 +71,7 @@
   }
 
   @Override
+  @SuppressWarnings("methodlength")
   public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
       long transactionLogIndex,
       OzoneManagerDoubleBufferHelper ozoneManagerDoubleBufferHelper) {
@@ -149,7 +150,7 @@
             .setIsVersionEnabled(dbBucketInfo.getIsVersionEnabled());
       }
 
-      //Check quotaInBytes and quotaInCounts to update
+      //Check quotaInBytes and quotaInNamespace to update
       String volumeKey = omMetadataManager.getVolumeKey(volumeName);
       OmVolumeArgs omVolumeArgs = omMetadataManager.getVolumeTable()
           .get(volumeKey);
@@ -159,10 +160,12 @@
       } else {
         bucketInfoBuilder.setQuotaInBytes(dbBucketInfo.getQuotaInBytes());
       }
-      if (checkQuotaCountsValid(omVolumeArgs, omBucketArgs)) {
-        bucketInfoBuilder.setQuotaInCounts(omBucketArgs.getQuotaInCounts());
+      if (checkQuotaNamespaceValid(omVolumeArgs, omBucketArgs)) {
+        bucketInfoBuilder.setQuotaInNamespace(
+            omBucketArgs.getQuotaInNamespace());
       } else {
-        bucketInfoBuilder.setQuotaInCounts(dbBucketInfo.getQuotaInCounts());
+        bucketInfoBuilder.setQuotaInNamespace(
+            dbBucketInfo.getQuotaInNamespace());
       }
 
       bucketInfoBuilder.setCreationTime(dbBucketInfo.getCreationTime());
@@ -179,6 +182,9 @@
 
       // Set the updateID to current transaction log index
       bucketInfoBuilder.setUpdateID(transactionLogIndex);
+      // Quota used remains unchanged
+      bucketInfoBuilder.setUsedBytes(dbBucketInfo.getUsedBytes());
+      bucketInfoBuilder.setUsedNamespace(dbBucketInfo.getUsedNamespace());
 
       omBucketInfo = bucketInfoBuilder.build();
 
@@ -227,6 +233,13 @@
       throws IOException {
     long quotaInBytes = omBucketArgs.getQuotaInBytes();
 
+    if (quotaInBytes == OzoneConsts.QUOTA_RESET &&
+        omVolumeArgs.getQuotaInBytes() != OzoneConsts.QUOTA_RESET) {
+      throw new OMException("Can not clear bucket spaceQuota because" +
+          " volume spaceQuota is not cleared.",
+          OMException.ResultCodes.QUOTA_ERROR);
+    }
+
     if (quotaInBytes == 0) {
       return false;
     }
@@ -257,11 +270,12 @@
     return true;
   }
 
-  public boolean checkQuotaCountsValid(OmVolumeArgs omVolumeArgs,
+  public boolean checkQuotaNamespaceValid(OmVolumeArgs omVolumeArgs,
       OmBucketArgs omBucketArgs) {
-    long quotaInCounts = omBucketArgs.getQuotaInCounts();
+    long quotaInNamespace = omBucketArgs.getQuotaInNamespace();
 
-    if ((quotaInCounts <= 0 && quotaInCounts != OzoneConsts.QUOTA_RESET)) {
+    if ((quotaInNamespace <= 0
+         && quotaInNamespace != OzoneConsts.QUOTA_RESET)) {
       return false;
     }
     return true;
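
Two validation rules change in this file: a bucket's spaceQuota can only be cleared (QUOTA_RESET) once the volume's spaceQuota is already cleared, and the renamed checkQuotaNamespaceValid() accepts only positive values or QUOTA_RESET. A minimal sketch of the latter predicate, assuming OzoneConsts.QUOTA_RESET keeps its sentinel meaning:

    import org.apache.hadoop.ozone.OzoneConsts;

    final class BucketQuotaRuleSketch {
      // Mirrors checkQuotaNamespaceValid(): zero and negative values other
      // than the QUOTA_RESET sentinel are rejected, leaving the stored
      // quota untouched.
      static boolean isNamespaceQuotaAcceptable(long requested) {
        return requested > 0 || requested == OzoneConsts.QUOTA_RESET;
      }
    }
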
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAclRequest.java
index 6bda132..f1a504f 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAclRequest.java
@@ -20,9 +20,12 @@
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 
 import com.google.common.base.Optional;
 import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.AuditLogger;
 import org.apache.hadoop.ozone.om.OMMetadataManager;
 import org.apache.hadoop.ozone.om.OMMetrics;
 import org.apache.hadoop.ozone.om.OzoneManager;
@@ -145,9 +148,14 @@
       }
     }
 
+    OzoneObj obj = getObject();
+    Map<String, String> auditMap = obj.toAuditMap();
+    if (ozoneAcls != null) {
+      auditMap.put(OzoneConsts.ACL, ozoneAcls.toString());
+    }
 
-    onComplete(operationResult, exception, ozoneManager.getMetrics());
-
+    onComplete(operationResult, exception, ozoneManager.getMetrics(),
+        ozoneManager.getAuditLogger(), auditMap);
     return omClientResponse;
   }
 
@@ -164,6 +172,13 @@
    */
   abstract String getPath();
 
+
+  /**
+   * Get the Bucket object Info from the request.
+   * @return OzoneObjInfo
+   */
+  abstract OzoneObj getObject();
+
   // TODO: Finer grain metrics can be moved to these callbacks. They can also
   // be abstracted into separate interfaces in future.
   /**
@@ -198,10 +213,11 @@
    * @param operationResult
    * @param exception
    * @param omMetrics
+   * @param auditLogger
+   * @param auditMap
    */
   abstract void onComplete(boolean operationResult, IOException exception,
-      OMMetrics omMetrics);
-
-
+      OMMetrics omMetrics, AuditLogger auditLogger,
+      Map<String, String> auditMap);
 }
 
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAddAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAddAclRequest.java
index 565871e..be575fc 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAddAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketAddAclRequest.java
@@ -22,13 +22,18 @@
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 
 import com.google.common.collect.Lists;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.om.OMMetrics;
 import org.apache.hadoop.ozone.om.OzoneManager;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
 import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneObj.ObjectType;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
 import org.apache.hadoop.ozone.util.BooleanBiFunction;
 import org.apache.hadoop.util.Time;
 import org.slf4j.Logger;
@@ -57,6 +62,7 @@
   private static BooleanBiFunction<List<OzoneAcl>, OmBucketInfo> bucketAddAclOp;
   private String path;
   private List<OzoneAcl> ozoneAcls;
+  private OzoneObj obj;
 
   static {
     bucketAddAclOp = (ozoneAcls, omBucketInfo) -> {
@@ -81,7 +87,8 @@
     super(omRequest, bucketAddAclOp);
     OzoneManagerProtocolProtos.AddAclRequest addAclRequest =
         getOmRequest().getAddAclRequest();
-    path = addAclRequest.getObj().getPath();
+    obj = OzoneObjInfo.fromProtobuf(addAclRequest.getObj());
+    path = obj.getPath();
     ozoneAcls = Lists.newArrayList(
         OzoneAcl.fromProtobuf(addAclRequest.getAcl()));
   }
@@ -97,6 +104,11 @@
   }
 
   @Override
+  OzoneObj getObject() {
+    return obj;
+  }
+
+  @Override
   OMResponse.Builder onInit() {
     return OmResponseUtil.getOMResponseBuilder(getOmRequest());
   }
@@ -119,7 +131,11 @@
 
   @Override
   void onComplete(boolean operationResult, IOException exception,
-      OMMetrics omMetrics) {
+      OMMetrics omMetrics, AuditLogger auditLogger,
+      Map<String, String> auditMap) {
+    auditLog(auditLogger, buildAuditMessage(OMAction.ADD_ACL, auditMap,
+        exception, getOmRequest().getUserInfo()));
+
     if (operationResult) {
       LOG.debug("Add acl: {} to path: {} success!", getAcls(), getPath());
     } else {
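
The bucket ACL requests now expose the full OzoneObj so that onComplete() can emit an audit entry for ADD_ACL, REMOVE_ACL and SET_ACL. A minimal sketch of the audit map assembled before onComplete() is invoked, mirroring the hunk in OMBucketAclRequest; the helper class is illustrative only:

    import java.util.List;
    import java.util.Map;

    import org.apache.hadoop.ozone.OzoneAcl;
    import org.apache.hadoop.ozone.OzoneConsts;
    import org.apache.hadoop.ozone.security.acl.OzoneObj;

    final class AclAuditMapSketch {
      // The object's own audit map is enriched with the ACL string and then
      // handed to onComplete() together with the AuditLogger.
      static Map<String, String> buildAuditMap(OzoneObj obj,
          List<OzoneAcl> ozoneAcls) {
        Map<String, String> auditMap = obj.toAuditMap();
        if (ozoneAcls != null) {
          auditMap.put(OzoneConsts.ACL, ozoneAcls.toString());
        }
        return auditMap;
      }
    }
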
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketRemoveAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketRemoveAclRequest.java
index 3932c0f..6f58886 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketRemoveAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketRemoveAclRequest.java
@@ -22,11 +22,16 @@
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.om.OzoneManager;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
 import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneObj.ObjectType;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
 import org.apache.hadoop.util.Time;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -54,6 +59,7 @@
   private static BooleanBiFunction<List<OzoneAcl>, OmBucketInfo> bucketAddAclOp;
   private String path;
   private List<OzoneAcl> ozoneAcls;
+  private OzoneObj obj;
 
   static {
     bucketAddAclOp = (ozoneAcls, omBucketInfo) -> {
@@ -78,7 +84,8 @@
     super(omRequest, bucketAddAclOp);
     OzoneManagerProtocolProtos.RemoveAclRequest removeAclRequest =
         getOmRequest().getRemoveAclRequest();
-    path = removeAclRequest.getObj().getPath();
+    obj = OzoneObjInfo.fromProtobuf(removeAclRequest.getObj());
+    path = obj.getPath();
     ozoneAcls = Lists.newArrayList(
         OzoneAcl.fromProtobuf(removeAclRequest.getAcl()));
   }
@@ -94,6 +101,11 @@
   }
 
   @Override
+  OzoneObj getObject() {
+    return obj;
+  }
+
+  @Override
   OMResponse.Builder onInit() {
     return OmResponseUtil.getOMResponseBuilder(getOmRequest());
   }
@@ -116,7 +128,11 @@
 
   @Override
   void onComplete(boolean operationResult, IOException exception,
-      OMMetrics omMetrics) {
+      OMMetrics omMetrics, AuditLogger auditLogger,
+      Map<String, String> auditMap) {
+    auditLog(auditLogger, buildAuditMessage(OMAction.REMOVE_ACL, auditMap,
+        exception, getOmRequest().getUserInfo()));
+
     if (operationResult) {
       LOG.debug("Remove acl: {} for path: {} success!", getAcls(), getPath());
     } else {
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketSetAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketSetAclRequest.java
index e4e64ba..c7d1e8f 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketSetAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/acl/OMBucketSetAclRequest.java
@@ -23,11 +23,16 @@
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
+import java.util.Map;
 
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.om.OzoneManager;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
 import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneObj.ObjectType;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
 import org.apache.hadoop.util.Time;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -54,6 +59,7 @@
         OmBucketInfo > bucketAddAclOp;
   private String path;
   private List<OzoneAcl> ozoneAcls;
+  private OzoneObj obj;
 
   static {
     bucketAddAclOp = (ozoneAcls, omBucketInfo) -> {
@@ -78,7 +84,8 @@
     super(omRequest, bucketAddAclOp);
     OzoneManagerProtocolProtos.SetAclRequest setAclRequest =
         getOmRequest().getSetAclRequest();
-    path = setAclRequest.getObj().getPath();
+    obj = OzoneObjInfo.fromProtobuf(setAclRequest.getObj());
+    path = obj.getPath();
     ozoneAcls = new ArrayList<>();
     setAclRequest.getAclList().forEach(aclInfo ->
         ozoneAcls.add(OzoneAcl.fromProtobuf(aclInfo)));
@@ -95,6 +102,11 @@
   }
 
   @Override
+  OzoneObj getObject() {
+    return obj;
+  }
+
+  @Override
   OMResponse.Builder onInit() {
     return OmResponseUtil.getOMResponseBuilder(getOmRequest());
   }
@@ -117,7 +129,11 @@
 
   @Override
   void onComplete(boolean operationResult, IOException exception,
-      OMMetrics omMetrics) {
+      OMMetrics omMetrics, AuditLogger auditLogger,
+      Map<String, String> auditMap) {
+    auditLog(auditLogger, buildAuditMessage(OMAction.SET_ACL, auditMap,
+        exception, getOmRequest().getUserInfo()));
+
     if (operationResult) {
       if (LOG.isDebugEnabled()) {
         LOG.debug("Set acl: {} for path: {} success!", getAcls(), getPath());
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
index 71e2601..c722f97 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
@@ -150,6 +150,7 @@
     OMClientResponse omClientResponse = null;
     Result result = Result.FAILURE;
     List<OmKeyInfo> missingParentInfos;
+    int numMissingParents = 0;
 
     try {
       keyArgs = resolveBucketLink(ozoneManager, keyArgs, auditMap);
@@ -201,6 +202,7 @@
         missingParentInfos = getAllParentInfo(ozoneManager, keyArgs,
             missingParents, inheritAcls, trxnLogIndex);
 
+        numMissingParents = missingParentInfos.size();
         OMFileRequest.addKeyTableCacheEntries(omMetadataManager, volumeName,
             bucketName, Optional.of(dirKeyInfo),
             Optional.of(missingParentInfos), trxnLogIndex);
@@ -230,8 +232,8 @@
     auditLog(auditLogger, buildAuditMessage(OMAction.CREATE_DIRECTORY,
         auditMap, exception, userInfo));
 
-    logResult(createDirectoryRequest, keyArgs, omMetrics, result, trxnLogIndex,
-        exception);
+    logResult(createDirectoryRequest, keyArgs, omMetrics, result,
+        exception, numMissingParents);
 
     return omClientResponse;
   }
@@ -291,8 +293,8 @@
   }
 
   private void logResult(CreateDirectoryRequest createDirectoryRequest,
-      KeyArgs keyArgs, OMMetrics omMetrics, Result result, long trxnLogIndex,
-      IOException exception) {
+      KeyArgs keyArgs, OMMetrics omMetrics, Result result,
+      IOException exception, int numMissingParents) {
 
     String volumeName = keyArgs.getVolumeName();
     String bucketName = keyArgs.getBucketName();
@@ -300,7 +302,8 @@
 
     switch (result) {
     case SUCCESS:
-      omMetrics.incNumKeys();
+      // Count the missing parents plus the directory being created.
+      omMetrics.incNumKeys(numMissingParents + 1);
       if (LOG.isDebugEnabled()) {
         LOG.debug("Directory created. Volume:{}, Bucket:{}, Key:{}",
             volumeName, bucketName, keyName);
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
index 8d3e700..76e404e 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
@@ -32,7 +32,6 @@
 import org.apache.hadoop.ozone.OzoneAcl;
 import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.om.OMConfigKeys;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
 import org.apache.hadoop.ozone.om.response.file.OMFileCreateResponse;
 import org.slf4j.Logger;
@@ -166,6 +165,7 @@
     String volumeName = keyArgs.getVolumeName();
     String bucketName = keyArgs.getBucketName();
     String keyName = keyArgs.getKeyName();
+    int numMissingParents = 0;
 
     // if isRecursive is true, file would be created even if parent
     // directories does not exist.
@@ -186,7 +186,6 @@
     boolean acquiredLock = false;
 
     OmKeyInfo omKeyInfo = null;
-    OmVolumeArgs omVolumeArgs = null;
     OmBucketInfo omBucketInfo = null;
     final List<OmKeyLocationInfo> locations = new ArrayList<>();
     List<OmKeyInfo> missingParentInfos;
@@ -278,13 +277,13 @@
           .collect(Collectors.toList());
       omKeyInfo.appendNewBlocks(newLocationList, false);
 
-      omVolumeArgs = getVolumeInfo(omMetadataManager, volumeName);
       omBucketInfo = getBucketInfo(omMetadataManager, volumeName, bucketName);
       // check bucket and volume quota
       long preAllocatedSpace = newLocationList.size()
           * ozoneManager.getScmBlockSize()
           * omKeyInfo.getFactor().getNumber();
       checkBucketQuotaInBytes(omBucketInfo, preAllocatedSpace);
+      checkBucketQuotaInNamespace(omBucketInfo, 1L);
 
       // Add to cache entry can be done outside of lock for this openKey.
       // Even if bucket gets deleted, when commitKey we shall identify if
@@ -300,7 +299,10 @@
           trxnLogIndex);
 
       omBucketInfo.incrUsedBytes(preAllocatedSpace);
+      // Update the used namespace count for the bucket.
+      omBucketInfo.incrUsedNamespace(1L);
 
+      numMissingParents = missingParentInfos.size();
       // Prepare response
       omResponse.setCreateFileResponse(CreateFileResponse.newBuilder()
           .setKeyInfo(omKeyInfo.getProtobuf())
@@ -308,8 +310,7 @@
           .setOpenVersion(openVersion).build())
           .setCmdType(CreateFile);
       omClientResponse = new OMFileCreateResponse(omResponse.build(),
-          omKeyInfo, missingParentInfos, clientID, omVolumeArgs,
-          omBucketInfo.copyObject());
+          omKeyInfo, missingParentInfos, clientID, omBucketInfo.copyObject());
 
       result = Result.SUCCESS;
     } catch (IOException ex) {
@@ -335,6 +336,9 @@
 
     switch (result) {
     case SUCCESS:
+      // Missing directories are created immediately, counting that here.
+      // The metric for the file is incremented as part of the file commit.
+      omMetrics.incNumKeys(numMissingParents);
       LOG.debug("File created. Volume:{}, Bucket:{}, Key:{}", volumeName,
           bucketName, keyName);
       break;
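
A worked example of the metric change above, with an illustrative path: creating /a/b/c/file.txt in a bucket where only /a exists makes numMissingParents == 2, so incNumKeys(2) runs at create time and the file itself is counted when the key is committed, for a total of three new keys:

    final class FileCreateMetricsSketch {
      // Illustrative only; the path and counts are assumptions.
      static int totalIncrement() {
        int numMissingParents = 2;        // b/ and c/ are created immediately
        int atCreate = numMissingParents; // incNumKeys(numMissingParents)
        int atCommit = 1;                 // the file is counted at key commit
        return atCreate + atCommit;       // three keys added overall
      }
    }
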
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java
index 0035371..961acdf 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java
@@ -26,7 +26,6 @@
 import com.google.common.base.Optional;
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
 import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
 import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
@@ -168,7 +167,6 @@
 
     OmKeyInfo openKeyInfo = null;
     IOException exception = null;
-    OmVolumeArgs omVolumeArgs = null;
     OmBucketInfo omBucketInfo = null;
     boolean acquiredLock = false;
 
@@ -197,7 +195,6 @@
 
       List<OmKeyLocationInfo> newLocationList = Collections.singletonList(
           OmKeyLocationInfo.getFromProtobuf(blockLocation));
-      omVolumeArgs = getVolumeInfo(omMetadataManager, volumeName);
 
       acquiredLock = omMetadataManager.getLock().acquireWriteLock(BUCKET_LOCK,
           volumeName, bucketName);
@@ -222,11 +219,10 @@
           new CacheValue<>(Optional.of(openKeyInfo), trxnLogIndex));
 
       omBucketInfo.incrUsedBytes(preAllocatedSpace);
-
       omResponse.setAllocateBlockResponse(AllocateBlockResponse.newBuilder()
           .setKeyLocation(blockLocation).build());
       omClientResponse = new OMAllocateBlockResponse(omResponse.build(),
-          openKeyInfo, clientID, omVolumeArgs, omBucketInfo.copyObject());
+          openKeyInfo, clientID, omBucketInfo.copyObject());
 
       LOG.debug("Allocated block for Volume:{}, Bucket:{}, OpenKey:{}",
           volumeName, bucketName, openKeyName);
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
index 00a70b3..84b53a8 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
@@ -30,7 +30,7 @@
 import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.om.OMConfigKeys;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.helpers.OzoneFSUtils;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
 import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
 import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
@@ -124,7 +124,6 @@
 
     IOException exception = null;
     OmKeyInfo omKeyInfo = null;
-    OmVolumeArgs omVolumeArgs = null;
     OmBucketInfo omBucketInfo = null;
     OMClientResponse omClientResponse = null;
     boolean bucketLockAcquired = false;
@@ -166,6 +165,14 @@
           throw new OMException("Can not create file: " + keyName +
               " as there is already directory in the given path", NOT_A_FILE);
         }
+        // Ensure the parent directory exists.
+        if (!"".equals(OzoneFSUtils.getParent(keyName))
+            && !checkDirectoryAlreadyExists(volumeName, bucketName,
+            OzoneFSUtils.getParent(keyName), omMetadataManager)) {
+          throw new OMException("Cannot create file : " + keyName
+              + " as parent directory doesn't exist",
+              OMException.ResultCodes.DIRECTORY_NOT_FOUND);
+        }
       }
 
       omKeyInfo = omMetadataManager.getOpenKeyTable().get(dbOpenKey);
@@ -194,7 +201,6 @@
 
       long scmBlockSize = ozoneManager.getScmBlockSize();
       int factor = omKeyInfo.getFactor().getNumber();
-      omVolumeArgs = getVolumeInfo(omMetadataManager, volumeName);
       omBucketInfo = getBucketInfo(omMetadataManager, volumeName, bucketName);
       // Block was pre-requested and UsedBytes updated when createKey and
       // AllocatedBlock. The space occupied by the Key shall be based on
@@ -205,8 +211,7 @@
       omBucketInfo.incrUsedBytes(correctedSpace);
 
       omClientResponse = new OMKeyCommitResponse(omResponse.build(),
-          omKeyInfo, dbOzoneKey, dbOpenKey, omVolumeArgs,
-          omBucketInfo.copyObject());
+          omKeyInfo, dbOzoneKey, dbOpenKey, omBucketInfo.copyObject());
 
       result = Result.SUCCESS;
     } catch (IOException ex) {
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
index ca706ef..406fd72 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
@@ -31,7 +31,6 @@
 import org.apache.hadoop.ozone.OzoneAcl;
 import org.apache.hadoop.ozone.om.OMConfigKeys;
 import org.apache.hadoop.ozone.om.exceptions.OMException;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.request.file.OMDirectoryCreateRequest;
 import org.apache.hadoop.ozone.om.request.file.OMFileRequest;
 import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
@@ -198,7 +197,6 @@
 
     OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
     OmKeyInfo omKeyInfo = null;
-    OmVolumeArgs omVolumeArgs = null;
     OmBucketInfo omBucketInfo = null;
     final List< OmKeyLocationInfo > locations = new ArrayList<>();
 
@@ -209,6 +207,7 @@
     IOException exception = null;
     Result result = null;
     List<OmKeyInfo> missingParentInfos = null;
+    int numMissingParents = 0;
     try {
       keyArgs = resolveBucketLink(ozoneManager, keyArgs, auditMap);
       volumeName = keyArgs.getVolumeName();
@@ -268,7 +267,7 @@
         OMFileRequest.addKeyTableCacheEntries(omMetadataManager, volumeName,
             bucketName, Optional.absent(), Optional.of(missingParentInfos),
             trxnLogIndex);
-
+        numMissingParents = missingParentInfos.size();
       }
 
       omKeyInfo = prepareKeyInfo(omMetadataManager, keyArgs, dbKeyInfo,
@@ -288,7 +287,6 @@
           .collect(Collectors.toList());
       omKeyInfo.appendNewBlocks(newLocationList, false);
 
-      omVolumeArgs = getVolumeInfo(omMetadataManager, volumeName);
       omBucketInfo = getBucketInfo(omMetadataManager, volumeName, bucketName);
       // Here we refer to the implementation of HDFS:
       // If the key size is 600MB, when createKey, keyLocationInfo in
@@ -302,6 +300,7 @@
           * omKeyInfo.getFactor().getNumber();
       // check bucket and volume quota
       checkBucketQuotaInBytes(omBucketInfo, preAllocatedSpace);
+      checkBucketQuotaInNamespace(omBucketInfo, 1L);
 
       // Add to cache entry can be done outside of lock for this openKey.
       // Even if bucket gets deleted, when commitKey we shall identify if
@@ -311,6 +310,8 @@
           new CacheValue<>(Optional.of(omKeyInfo), trxnLogIndex));
 
       omBucketInfo.incrUsedBytes(preAllocatedSpace);
+      // Update the used namespace count for the bucket.
+      omBucketInfo.incrUsedNamespace(1L);
 
       // Prepare response
       omResponse.setCreateKeyResponse(CreateKeyResponse.newBuilder()
@@ -319,8 +320,7 @@
           .setOpenVersion(openVersion).build())
           .setCmdType(Type.CreateKey);
       omClientResponse = new OMKeyCreateResponse(omResponse.build(),
-          omKeyInfo, missingParentInfos, clientID, omVolumeArgs,
-          omBucketInfo.copyObject());
+          omKeyInfo, missingParentInfos, clientID, omBucketInfo.copyObject());
 
       result = Result.SUCCESS;
     } catch (IOException ex) {
@@ -346,6 +346,9 @@
 
     switch (result) {
     case SUCCESS:
+      // Missing directories are created immediately, counting that here.
+      // The metric for the key is incremented as part of the key commit.
+      omMetrics.incNumKeys(numMissingParents);
       LOG.debug("Key created. Volume:{}, Bucket:{}, Key:{}", volumeName,
           bucketName, keyName);
       break;
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
index ed95687..b1a426f 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
@@ -23,7 +23,6 @@
 
 import com.google.common.base.Optional;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
 import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
 import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
@@ -110,7 +109,6 @@
     boolean acquiredLock = false;
     OMClientResponse omClientResponse = null;
     Result result = null;
-    OmVolumeArgs omVolumeArgs = null;
     OmBucketInfo omBucketInfo = null;
     try {
       keyArgs = resolveBucketLink(ozoneManager, keyArgs, auditMap);
@@ -144,11 +142,11 @@
               keyName)),
           new CacheValue<>(Optional.absent(), trxnLogIndex));
 
-      omVolumeArgs = getVolumeInfo(omMetadataManager, volumeName);
       omBucketInfo = getBucketInfo(omMetadataManager, volumeName, bucketName);
 
       long quotaReleased = sumBlockLengths(omKeyInfo);
       omBucketInfo.incrUsedBytes(-quotaReleased);
+      omBucketInfo.incrUsedNamespace(-1L);
 
       // No need to add cache entries to delete table. As delete table will
       // be used by DeleteKeyService only, not used for any client response
@@ -157,7 +155,7 @@
 
       omClientResponse = new OMKeyDeleteResponse(omResponse
           .setDeleteKeyResponse(DeleteKeyResponse.newBuilder()).build(),
-          omKeyInfo, ozoneManager.isRatisEnabled(), omVolumeArgs,
+          omKeyInfo, ozoneManager.isRatisEnabled(),
           omBucketInfo.copyObject());
 
       result = Result.SUCCESS;
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
index 553f7f0..bb671df 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
@@ -32,7 +32,6 @@
 import com.google.common.base.Optional;
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
-import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
 import org.apache.hadoop.ozone.OzoneAcl;
 import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.om.PrefixManager;
@@ -74,6 +73,8 @@
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.security.UserGroupInformation;
 
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.BlockTokenSecretProto.AccessModeProto.READ;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.BlockTokenSecretProto.AccessModeProto.WRITE;
 import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
 import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
     .BUCKET_NOT_FOUND;
@@ -143,7 +144,8 @@
       if (grpcBlockTokenEnabled) {
         builder.setToken(secretManager
             .generateToken(remoteUser, allocatedBlock.getBlockID().toString(),
-                getAclForUser(remoteUser), scmBlockSize));
+                EnumSet.of(READ, WRITE),
+                scmBlockSize));
       }
       locationInfos.add(builder.build());
     }
@@ -159,18 +161,6 @@
   }
 
   /**
-   * Return acl for user.
-   * @param user
-   *
-   * */
-  private EnumSet< HddsProtos.BlockTokenSecretProto.AccessModeProto>
-      getAclForUser(String user) {
-    // TODO: Return correct acl for user.
-    return EnumSet.allOf(
-        HddsProtos.BlockTokenSecretProto.AccessModeProto.class);
-  }
-
-  /**
    * Validate bucket and volume exists or not.
    * @param omMetadataManager
    * @param volumeName
@@ -591,6 +581,25 @@
   }
 
   /**
+   * Check namespace quota.
+   */
+  protected void checkBucketQuotaInNamespace(OmBucketInfo omBucketInfo,
+      long allocatedNamespace) throws IOException {
+    if (omBucketInfo.getQuotaInNamespace() > OzoneConsts.QUOTA_RESET) {
+      long usedNamespace = omBucketInfo.getUsedNamespace();
+      long quotaInNamespace = omBucketInfo.getQuotaInNamespace();
+      long toUseNamespaceInTotal = usedNamespace + allocatedNamespace;
+      if (quotaInNamespace < toUseNamespaceInTotal) {
+        throw new OMException("The namespace quota of Bucket:"
+            + omBucketInfo.getBucketName() + " exceeded: quotaInNamespace: "
+            + quotaInNamespace + " but namespace consumed: "
+            + toUseNamespaceInTotal + ".",
+            OMException.ResultCodes.QUOTA_EXCEEDED);
+      }
+    }
+  }
+
+  /**
    * Check directory exists. If exists return true, else false.
    * @param volumeName
    * @param bucketName
@@ -610,30 +619,6 @@
   }
 
   /**
-   * Return volume info for the specified volume. If the volume does not
-   * exist, returns {@code null}.
-   * @param omMetadataManager
-   * @param volume
-   * @return OmVolumeArgs
-   * @throws IOException
-   */
-  protected OmVolumeArgs getVolumeInfo(OMMetadataManager omMetadataManager,
-      String volume) {
-
-    OmVolumeArgs volumeArgs = null;
-
-    CacheValue<OmVolumeArgs> value =
-        omMetadataManager.getVolumeTable().getCacheValue(
-        new CacheKey<>(omMetadataManager.getVolumeKey(volume)));
-
-    if (value != null) {
-      volumeArgs = value.getCacheValue();
-    }
-
-    return volumeArgs;
-  }
-
-  /**
    * @return the number of bytes used by blocks pointed to by {@code omKeyInfo}.
    */
   protected static long sumBlockLengths(OmKeyInfo omKeyInfo) {
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeysDeleteRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeysDeleteRequest.java
index c4c5f9c..7617b2f 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeysDeleteRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeysDeleteRequest.java
@@ -29,7 +29,6 @@
 import org.apache.hadoop.ozone.om.ResolvedBucket;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
 import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
 import org.apache.hadoop.ozone.om.response.OMClientResponse;
@@ -157,7 +156,6 @@
       }
 
       long quotaReleased = 0;
-      OmVolumeArgs omVolumeArgs = getVolumeInfo(omMetadataManager, volumeName);
       OmBucketInfo omBucketInfo =
           getBucketInfo(omMetadataManager, volumeName, bucketName);
 
@@ -172,14 +170,14 @@
         quotaReleased += sumBlockLengths(omKeyInfo);
       }
       omBucketInfo.incrUsedBytes(-quotaReleased);
+      omBucketInfo.incrUsedNamespace(-1L * omKeyInfoList.size());
 
       omClientResponse = new OMKeysDeleteResponse(omResponse
           .setDeleteKeysResponse(DeleteKeysResponse.newBuilder()
               .setStatus(deleteStatus).setUnDeletedKeys(unDeletedKeys))
           .setStatus(deleteStatus ? OK : PARTIAL_DELETE)
           .setSuccess(deleteStatus).build(), omKeyInfoList,
-          ozoneManager.isRatisEnabled(), omVolumeArgs,
-          omBucketInfo.copyObject());
+          ozoneManager.isRatisEnabled(), omBucketInfo.copyObject());
 
       result = Result.SUCCESS;
 
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAclRequest.java
index 68d621d..74990e9 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAclRequest.java
@@ -19,8 +19,10 @@
 package org.apache.hadoop.ozone.om.request.key.acl;
 
 import java.io.IOException;
+import java.util.Map;
 
 import com.google.common.base.Optional;
+import org.apache.hadoop.ozone.audit.AuditLogger;
 import org.apache.hadoop.ozone.om.OMMetadataManager;
 import org.apache.hadoop.ozone.om.OzoneManager;
 import org.apache.hadoop.ozone.om.exceptions.OMException;
@@ -130,7 +132,10 @@
       }
     }
 
-    onComplete(result, operationResult, exception, trxnLogIndex);
+    OzoneObj obj = getObject();
+    Map<String, String> auditMap = obj.toAuditMap();
+    onComplete(result, operationResult, exception, trxnLogIndex,
+        ozoneManager.getAuditLogger(), auditMap);
 
     return omClientResponse;
   }
@@ -141,6 +146,12 @@
    */
   abstract String getPath();
 
+  /**
+   * Get Key object Info from the request.
+   * @return OzoneObjInfo
+   */
+  abstract OzoneObj getObject();
+
   // TODO: Finer grain metrics can be moved to these callbacks. They can also
   // be abstracted into separate interfaces in future.
   /**
@@ -178,7 +189,8 @@
    * @param exception
    */
   abstract void onComplete(Result result, boolean operationResult,
-      IOException exception, long trxnLogIndex);
+      IOException exception, long trxnLogIndex, AuditLogger auditLogger,
+      Map<String, String> auditMap);
 
   /**
    * Apply the acl operation, if successfully completed returns true,
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAddAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAddAclRequest.java
index 6a4922c..abbe80c 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAddAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyAddAclRequest.java
@@ -22,9 +22,13 @@
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 
 import com.google.common.collect.Lists;
 import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.om.OzoneManager;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
@@ -32,6 +36,8 @@
 import org.apache.hadoop.ozone.om.response.key.acl.OMKeyAclResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneObj.ObjectType;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
 import org.apache.hadoop.util.Time;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -64,12 +70,14 @@
 
   private String path;
   private List<OzoneAcl> ozoneAcls;
+  private OzoneObj obj;
 
   public OMKeyAddAclRequest(OMRequest omRequest) {
     super(omRequest);
     OzoneManagerProtocolProtos.AddAclRequest addAclRequest =
         getOmRequest().getAddAclRequest();
-    path = addAclRequest.getObj().getPath();
+    obj = OzoneObjInfo.fromProtobuf(addAclRequest.getObj());
+    path = obj.getPath();
     ozoneAcls = Lists.newArrayList(
         OzoneAcl.fromProtobuf(addAclRequest.getAcl()));
   }
@@ -80,6 +88,11 @@
   }
 
   @Override
+  OzoneObj getObject() {
+    return obj;
+  }
+
+  @Override
   OMResponse.Builder onInit() {
     return OmResponseUtil.getOMResponseBuilder(getOmRequest());
   }
@@ -95,7 +108,8 @@
 
   @Override
   void onComplete(Result result, boolean operationResult,
-      IOException exception, long trxnLogIndex) {
+      IOException exception, long trxnLogIndex, AuditLogger auditLogger,
+      Map<String, String> auditMap) {
     switch (result) {
     case SUCCESS:
       if (LOG.isDebugEnabled()) {
@@ -113,6 +127,12 @@
       LOG.error("Unrecognized Result for OMKeyAddAclRequest: {}",
           getOmRequest());
     }
+
+    if (ozoneAcls != null) {
+      auditMap.put(OzoneConsts.ACL, ozoneAcls.toString());
+    }
+    auditLog(auditLogger, buildAuditMessage(OMAction.ADD_ACL, auditMap,
+        exception, getOmRequest().getUserInfo()));
   }
 
   @Override
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyRemoveAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyRemoveAclRequest.java
index 2484958..8608f6f 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyRemoveAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeyRemoveAclRequest.java
@@ -22,9 +22,13 @@
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 
 import com.google.common.collect.Lists;
 import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.om.OzoneManager;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
@@ -32,6 +36,8 @@
 import org.apache.hadoop.ozone.om.response.key.acl.OMKeyAclResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneObj.ObjectType;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
 import org.apache.hadoop.util.Time;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -64,12 +70,14 @@
 
   private String path;
   private List<OzoneAcl> ozoneAcls;
+  private OzoneObj obj;
 
   public OMKeyRemoveAclRequest(OMRequest omRequest) {
     super(omRequest);
     OzoneManagerProtocolProtos.RemoveAclRequest removeAclRequest =
         getOmRequest().getRemoveAclRequest();
-    path = removeAclRequest.getObj().getPath();
+    obj = OzoneObjInfo.fromProtobuf(removeAclRequest.getObj());
+    path = obj.getPath();
     ozoneAcls = Lists.newArrayList(
         OzoneAcl.fromProtobuf(removeAclRequest.getAcl()));
   }
@@ -80,6 +88,11 @@
   }
 
   @Override
+  OzoneObj getObject() {
+    return obj;
+  }
+
+  @Override
   OMResponse.Builder onInit() {
     return OmResponseUtil.getOMResponseBuilder(getOmRequest());
   }
@@ -95,7 +108,8 @@
 
   @Override
   void onComplete(Result result, boolean operationResult,
-      IOException exception, long trxnLogIndex) {
+      IOException exception, long trxnLogIndex, AuditLogger auditLogger,
+      Map<String, String> auditMap) {
     switch (result) {
     case SUCCESS:
       if (LOG.isDebugEnabled()) {
@@ -114,6 +128,12 @@
       LOG.error("Unrecognized Result for OMKeyRemoveAclRequest: {}",
           getOmRequest());
     }
+
+    if (ozoneAcls != null) {
+      auditMap.put(OzoneConsts.ACL, ozoneAcls.toString());
+    }
+    auditLog(auditLogger, buildAuditMessage(OMAction.REMOVE_ACL, auditMap,
+        exception, getOmRequest().getUserInfo()));
   }
 
   @Override
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeySetAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeySetAclRequest.java
index a5d736f..31c165e 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeySetAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/OMKeySetAclRequest.java
@@ -22,9 +22,13 @@
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 
 import com.google.common.collect.Lists;
 import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.om.OzoneManager;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
 import org.apache.hadoop.ozone.om.helpers.OzoneAclUtil;
@@ -33,6 +37,8 @@
 import org.apache.hadoop.ozone.om.response.key.acl.OMKeyAclResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneObj.ObjectType;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
 import org.apache.hadoop.util.Time;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -65,12 +71,14 @@
 
   private String path;
   private List<OzoneAcl> ozoneAcls;
+  private OzoneObj obj;
 
   public OMKeySetAclRequest(OMRequest omRequest) {
     super(omRequest);
     OzoneManagerProtocolProtos.SetAclRequest setAclRequest =
         getOmRequest().getSetAclRequest();
-    path = setAclRequest.getObj().getPath();
+    obj = OzoneObjInfo.fromProtobuf(setAclRequest.getObj());
+    path = obj.getPath();
     ozoneAcls = Lists.newArrayList(
         OzoneAclUtil.fromProtobuf(setAclRequest.getAclList()));
   }
@@ -81,6 +89,11 @@
   }
 
   @Override
+  OzoneObj getObject() {
+    return obj;
+  }
+
+  @Override
   OMResponse.Builder onInit() {
     return OmResponseUtil.getOMResponseBuilder(getOmRequest());
   }
@@ -96,7 +109,8 @@
 
   @Override
   void onComplete(Result result, boolean operationResult,
-      IOException exception, long trxnLogIndex) {
+      IOException exception, long trxnLogIndex, AuditLogger auditLogger,
+      Map<String, String> auditMap) {
     switch (result) {
     case SUCCESS:
       if (LOG.isDebugEnabled()) {
@@ -110,6 +124,12 @@
       LOG.error("Unrecognized Result for OMKeySetAclRequest: {}",
           getOmRequest());
     }
+
+    if (ozoneAcls != null) {
+      auditMap.put(OzoneConsts.ACL, ozoneAcls.toString());
+    }
+    auditLog(auditLogger, buildAuditMessage(OMAction.SET_ACL, auditMap,
+        exception, getOmRequest().getUserInfo()));
   }
 
   @Override
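
Note on the key ACL hunks above: all three requests now materialize the protobuf obj as an OzoneObjInfo up front (exposed through getObject()), and onComplete takes the audit logger plus a pre-built audit map, appends the ACL list, and emits the matching ADD_ACL / REMOVE_ACL / SET_ACL audit entry. The same shape repeats below for the prefix and volume ACL requests. A minimal, self-contained sketch of that completion step, with plain stand-ins for AuditLogger, OMAction and OzoneConsts (map contents are made up for illustration):

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Stand-in types only; the real AuditLogger/OMAction/OzoneConsts live in
    // the Ozone tree and are not reproduced here.
    public class KeyAclAuditSketch {

      enum Action { ADD_ACL, REMOVE_ACL, SET_ACL }

      interface AuditSink {
        void log(Action action, Map<String, String> params, IOException error);
      }

      // Mirrors the new onComplete(...) contract: the caller passes the audit
      // sink and a pre-built audit map instead of the request logging on its own.
      static void onComplete(Action action, List<String> acls,
          Map<String, String> auditMap, IOException exception, AuditSink sink) {
        if (acls != null) {
          auditMap.put("acl", acls.toString());  // OzoneConsts.ACL in the real code
        }
        sink.log(action, auditMap, exception);
      }

      public static void main(String[] args) {
        Map<String, String> auditMap = new HashMap<>();
        auditMap.put("resource", "/vol1/bucket1/key1");  // obj.toAuditMap() upstream
        onComplete(Action.ADD_ACL, List.of("user:hadoop:rw"), auditMap, null,
            (a, p, e) -> System.out.println(a + " " + p + " ok=" + (e == null)));
      }
    }
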
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixAclRequest.java
index 6fbd7d2..d4372ce 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixAclRequest.java
@@ -19,8 +19,10 @@
 package org.apache.hadoop.ozone.om.request.key.acl.prefix;
 
 import java.io.IOException;
+import java.util.Map;
 
 import com.google.common.base.Optional;
+import org.apache.hadoop.ozone.audit.AuditLogger;
 import org.apache.hadoop.ozone.om.OMMetadataManager;
 import org.apache.hadoop.ozone.om.OMMetrics;
 import org.apache.hadoop.ozone.om.OzoneManager;
@@ -138,8 +140,10 @@
       }
     }
 
+    OzoneObj obj = getOzoneObj();
+    Map<String, String> auditMap = obj.toAuditMap();
     onComplete(opResult, exception, ozoneManager.getMetrics(), result,
-        trxnLogIndex);
+        trxnLogIndex, ozoneManager.getAuditLogger(), auditMap);
 
     return omClientResponse;
   }
@@ -186,7 +190,8 @@
    * @param omMetrics
    */
   abstract void onComplete(boolean operationResult, IOException exception,
-      OMMetrics omMetrics, Result result, long trxnLogIndex);
+      OMMetrics omMetrics, Result result, long trxnLogIndex,
+      AuditLogger auditLogger, Map<String, String> auditMap);
 
   /**
    * Apply the acl operation, if successfully completed returns true,
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixAddAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixAddAclRequest.java
index e4dcea6..ada76ab 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixAddAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixAddAclRequest.java
@@ -22,8 +22,12 @@
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 
 import com.google.common.collect.Lists;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.om.OMMetrics;
 import org.apache.hadoop.ozone.om.PrefixManagerImpl;
 import org.apache.hadoop.ozone.om.PrefixManagerImpl.OMPrefixAclOpResult;
@@ -96,7 +100,8 @@
 
   @Override
   void onComplete(boolean operationResult, IOException exception,
-      OMMetrics omMetrics, Result result, long trxnLogIndex) {
+      OMMetrics omMetrics, Result result, long trxnLogIndex,
+      AuditLogger auditLogger, Map<String, String> auditMap) {
     switch (result) {
     case SUCCESS:
       if (LOG.isDebugEnabled()) {
@@ -118,6 +123,12 @@
       LOG.error("Unrecognized Result for OMPrefixAddAclRequest: {}",
           getOmRequest());
     }
+
+    if (ozoneAcls != null) {
+      auditMap.put(OzoneConsts.ACL, ozoneAcls.toString());
+    }
+    auditLog(auditLogger, buildAuditMessage(OMAction.ADD_ACL, auditMap,
+        exception, getOmRequest().getUserInfo()));
   }
 
   @Override
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixRemoveAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixRemoveAclRequest.java
index 7af93ae..fd26e77 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixRemoveAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixRemoveAclRequest.java
@@ -22,8 +22,12 @@
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 
 import com.google.common.collect.Lists;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.om.OMMetrics;
 import org.apache.hadoop.ozone.om.PrefixManagerImpl;
 import org.apache.hadoop.ozone.om.PrefixManagerImpl.OMPrefixAclOpResult;
@@ -93,7 +97,8 @@
 
   @Override
   void onComplete(boolean operationResult, IOException exception,
-      OMMetrics omMetrics, Result result, long trxnLogIndex) {
+      OMMetrics omMetrics, Result result, long trxnLogIndex,
+      AuditLogger auditLogger, Map<String, String> auditMap) {
     switch (result) {
     case SUCCESS:
       if (LOG.isDebugEnabled()) {
@@ -115,6 +120,12 @@
       LOG.error("Unrecognized Result for OMPrefixRemoveAclRequest: {}",
           getOmRequest());
     }
+
+    if (ozoneAcls != null) {
+      auditMap.put(OzoneConsts.ACL, ozoneAcls.toString());
+    }
+    auditLog(auditLogger, buildAuditMessage(OMAction.REMOVE_ACL, auditMap,
+        exception, getOmRequest().getUserInfo()));
   }
 
   @Override
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixSetAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixSetAclRequest.java
index a0afece..31e87b7 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixSetAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/acl/prefix/OMPrefixSetAclRequest.java
@@ -23,7 +23,11 @@
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
+import java.util.Map;
 
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.om.OMMetrics;
 import org.apache.hadoop.ozone.om.PrefixManagerImpl;
 import org.apache.hadoop.ozone.om.PrefixManagerImpl.OMPrefixAclOpResult;
@@ -94,7 +98,8 @@
 
   @Override
   void onComplete(boolean operationResult, IOException exception,
-      OMMetrics omMetrics, Result result, long trxnLogIndex) {
+      OMMetrics omMetrics, Result result, long trxnLogIndex,
+      AuditLogger auditLogger, Map<String, String> auditMap) {
     switch (result) {
     case SUCCESS:
       if (LOG.isDebugEnabled()) {
@@ -111,6 +116,12 @@
       LOG.error("Unrecognized Result for OMPrefixSetAclRequest: {}",
           getOmRequest());
     }
+
+    if (ozoneAcls != null) {
+      auditMap.put(OzoneConsts.ACL, ozoneAcls.toString());
+    }
+    auditLog(auditLogger, buildAuditMessage(OMAction.SET_ACL, auditMap,
+        exception, getOmRequest().getUserInfo()));
   }
 
   @Override
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadAbortRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadAbortRequest.java
index d6b9d34..b65ed4d 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadAbortRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadAbortRequest.java
@@ -24,7 +24,6 @@
 
 import com.google.common.base.Optional;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
 import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
 import org.slf4j.Logger;
@@ -107,7 +106,6 @@
         getOmRequest());
     OMClientResponse omClientResponse = null;
     Result result = null;
-    OmVolumeArgs omVolumeArgs = null;
     OmBucketInfo omBucketInfo = null;
     try {
       keyArgs = resolveBucketLink(ozoneManager, keyArgs, auditMap);
@@ -126,7 +124,6 @@
 
       OmKeyInfo omKeyInfo =
           omMetadataManager.getOpenKeyTable().get(multipartKey);
-      omVolumeArgs = getVolumeInfo(omMetadataManager, volumeName);
       omBucketInfo = getBucketInfo(omMetadataManager, volumeName, bucketName);
 
       // If there is no entry in openKeyTable, then there is no multipart
@@ -169,7 +166,7 @@
           omResponse.setAbortMultiPartUploadResponse(
               MultipartUploadAbortResponse.newBuilder()).build(),
           multipartKey, multipartKeyInfo, ozoneManager.isRatisEnabled(),
-          omVolumeArgs, omBucketInfo.copyObject());
+          omBucketInfo.copyObject());
 
       result = Result.SUCCESS;
     } catch (IOException ex) {
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCommitPartRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCommitPartRequest.java
index d84dfee..3b50272 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCommitPartRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/multipart/S3MultipartUploadCommitPartRequest.java
@@ -28,7 +28,6 @@
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
 import org.apache.hadoop.ozone.om.helpers.OmMultipartKeyInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
 import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
 import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
@@ -117,7 +116,6 @@
     String multipartKey = null;
     OmMultipartKeyInfo multipartKeyInfo = null;
     Result result = null;
-    OmVolumeArgs omVolumeArgs = null;
     OmBucketInfo omBucketInfo = null;
     OmBucketInfo copyBucketInfo = null;
     try {
@@ -215,7 +213,6 @@
 
       long scmBlockSize = ozoneManager.getScmBlockSize();
       int factor = omKeyInfo.getFactor().getNumber();
-      omVolumeArgs = getVolumeInfo(omMetadataManager, volumeName);
       omBucketInfo = getBucketInfo(omMetadataManager, volumeName, bucketName);
       // Block was pre-requested and UsedBytes updated when createKey and
       // AllocatedBlock. The space occupied by the Key shall be based on
@@ -231,7 +228,7 @@
       omClientResponse = new S3MultipartUploadCommitPartResponse(
           omResponse.build(), multipartKey, openKey,
           multipartKeyInfo, oldPartKeyInfo, omKeyInfo,
-          ozoneManager.isRatisEnabled(), omVolumeArgs,
+          ozoneManager.isRatisEnabled(),
           omBucketInfo.copyObject());
 
       result = Result.SUCCESS;
@@ -241,7 +238,7 @@
       omClientResponse = new S3MultipartUploadCommitPartResponse(
           createErrorOMResponse(omResponse, exception), multipartKey, openKey,
           multipartKeyInfo, oldPartKeyInfo, omKeyInfo,
-          ozoneManager.isRatisEnabled(), omVolumeArgs, copyBucketInfo);
+          ozoneManager.isRatisEnabled(), copyBucketInfo);
     } finally {
       addResponseToDoubleBuffer(trxnLogIndex, omClientResponse,
           omDoubleBufferHelper);
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/upgrade/OMPrepareRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/upgrade/OMPrepareRequest.java
index 6b57bd8..3900a56 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/upgrade/OMPrepareRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/upgrade/OMPrepareRequest.java
@@ -223,8 +223,9 @@
       snapshotIndex = raftLogIndex;
     }
 
+    // TODO: avijayanhwx, Ethan Rose, please check and replace.
     CompletableFuture<Long> purgeFuture =
-        raftLog.syncWithSnapshot(snapshotIndex);
+        raftLog.onSnapshotInstalled(snapshotIndex);
 
     try {
       Long purgeIndex = purgeFuture.get();
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetQuotaRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetQuotaRequest.java
index b967c10..8ac23f4 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetQuotaRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeSetQuotaRequest.java
@@ -136,11 +136,12 @@
       } else {
         omVolumeArgs.setQuotaInBytes(omVolumeArgs.getQuotaInBytes());
       }
-      if (checkQuotaCountsValid(setVolumePropertyRequest.getQuotaInCounts())) {
-        omVolumeArgs.setQuotaInCounts(
-            setVolumePropertyRequest.getQuotaInCounts());
+      if (checkQuotaNamespaceValid(
+          setVolumePropertyRequest.getQuotaInNamespace())) {
+        omVolumeArgs.setQuotaInNamespace(
+            setVolumePropertyRequest.getQuotaInNamespace());
       } else {
-        omVolumeArgs.setQuotaInCounts(omVolumeArgs.getQuotaInCounts());
+        omVolumeArgs.setQuotaInNamespace(omVolumeArgs.getQuotaInNamespace());
       }
 
       omVolumeArgs.setUpdateID(transactionLogIndex,
@@ -211,9 +212,10 @@
     return true;
   }
 
-  public boolean checkQuotaCountsValid(long quotaInCounts) {
+  public boolean checkQuotaNamespaceValid(long quotaInNamespace) {
 
-    if ((quotaInCounts <= 0 && quotaInCounts != OzoneConsts.QUOTA_RESET)) {
+    if ((quotaInNamespace <= 0
+         && quotaInNamespace != OzoneConsts.QUOTA_RESET)) {
       return false;
     }
     return true;
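
The rename above from quotaInCounts to quotaInNamespace keeps the original rule: a namespace quota is accepted when it is positive or equals the reset sentinel. A tiny stand-alone sketch of the same condition, assuming QUOTA_RESET acts as a -1 sentinel (the constant's value is an assumption here; only the shape of the check comes from the hunk):

    public class QuotaNamespaceCheck {

      static final long QUOTA_RESET = -1L;   // stand-in for OzoneConsts.QUOTA_RESET

      // Equivalent form of checkQuotaNamespaceValid: reject zero or negative
      // values unless the caller explicitly asked for a quota reset.
      static boolean isQuotaNamespaceValid(long quotaInNamespace) {
        return quotaInNamespace > 0 || quotaInNamespace == QUOTA_RESET;
      }

      public static void main(String[] args) {
        System.out.println(isQuotaNamespaceValid(100));          // true
        System.out.println(isQuotaNamespaceValid(QUOTA_RESET));  // true
        System.out.println(isQuotaNamespaceValid(0));            // false
      }
    }
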
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAclRequest.java
index c3d2620..b91aef8 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAclRequest.java
@@ -21,6 +21,8 @@
 import com.google.common.base.Optional;
 import org.apache.hadoop.hdds.scm.storage.CheckedBiFunction;
 import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.AuditLogger;
 import org.apache.hadoop.ozone.om.OMMetadataManager;
 import org.apache.hadoop.ozone.om.OMMetrics;
 import org.apache.hadoop.ozone.om.OzoneManager;
@@ -38,6 +40,7 @@
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 
 import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.VOLUME_LOCK;
 
@@ -131,7 +134,13 @@
       }
     }
 
-    onComplete(result, exception, trxnLogIndex);
+    OzoneObj obj = getObject();
+    Map<String, String> auditMap = obj.toAuditMap();
+    if (ozoneAcls != null) {
+      auditMap.put(OzoneConsts.ACL, ozoneAcls.toString());
+    }
+    onComplete(result, exception, trxnLogIndex, ozoneManager.getAuditLogger(),
+        auditMap);
 
     return omClientResponse;
   }
@@ -151,6 +160,12 @@
    */
   abstract String getVolumeName();
 
+  /**
+   * Get the volume's OzoneObj from the request.
+   * @return OzoneObj
+   */
+  abstract OzoneObj getObject();
+
   // TODO: Finer grain metrics can be moved to these callbacks. They can also
   // be abstracted into separate interfaces in future.
   /**
@@ -184,5 +199,6 @@
    * Usually used for logging without lock.
    * @param ex
    */
-  abstract void onComplete(Result result, IOException ex, long trxnLogIndex);
+  abstract void onComplete(Result result, IOException ex, long trxnLogIndex,
+      AuditLogger auditLogger, Map<String, String> auditMap);
 }
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAddAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAddAclRequest.java
index 50afa1a..03956a2 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAddAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAddAclRequest.java
@@ -23,6 +23,8 @@
 import com.google.common.collect.Lists;
 import org.apache.hadoop.hdds.scm.storage.CheckedBiFunction;
 import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.om.OzoneManager;
 import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
@@ -33,12 +35,15 @@
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneObj.ObjectType;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
 import org.apache.hadoop.util.Time;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 
 /**
  * Handles volume add acl request.
@@ -69,6 +74,7 @@
 
   private List<OzoneAcl> ozoneAcls;
   private String volumeName;
+  private OzoneObj obj;
 
   public OMVolumeAddAclRequest(OMRequest omRequest) {
     super(omRequest, volumeAddAclOp);
@@ -77,7 +83,8 @@
     Preconditions.checkNotNull(addAclRequest);
     ozoneAcls = Lists.newArrayList(
         OzoneAcl.fromProtobuf(addAclRequest.getAcl()));
-    volumeName = addAclRequest.getObj().getPath().substring(1);
+    obj = OzoneObjInfo.fromProtobuf(addAclRequest.getObj());
+    volumeName = obj.getPath().substring(1);
   }
 
   @Override
@@ -94,6 +101,10 @@
     return ozoneAcls.get(0);
   }
 
+  @Override
+  OzoneObj getObject() {
+    return obj;
+  }
 
   @Override
   OMResponse.Builder onInit() {
@@ -115,7 +126,8 @@
   }
 
   @Override
-  void onComplete(Result result, IOException ex, long trxnLogIndex) {
+  void onComplete(Result result, IOException ex, long trxnLogIndex,
+      AuditLogger auditLogger, Map<String, String> auditMap) {
     switch (result) {
     case SUCCESS:
       if (LOG.isDebugEnabled()) {
@@ -131,6 +143,9 @@
       LOG.error("Unrecognized Result for OMVolumeAddAclRequest: {}",
           getOmRequest());
     }
+
+    auditLog(auditLogger, buildAuditMessage(OMAction.ADD_ACL, auditMap,
+        ex, getOmRequest().getUserInfo()));
   }
 
   public static String getRequestType() {
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeRemoveAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeRemoveAclRequest.java
index cc5ac72..9277c04 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeRemoveAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeRemoveAclRequest.java
@@ -23,6 +23,8 @@
 import com.google.common.collect.Lists;
 import org.apache.hadoop.hdds.scm.storage.CheckedBiFunction;
 import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.om.OzoneManager;
 import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
@@ -33,12 +35,15 @@
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneObj.ObjectType;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
 import org.apache.hadoop.util.Time;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 
 /**
  * Handles volume remove acl request.
@@ -69,6 +74,7 @@
 
   private List<OzoneAcl> ozoneAcls;
   private String volumeName;
+  private OzoneObj obj;
 
   public OMVolumeRemoveAclRequest(OMRequest omRequest) {
     super(omRequest, volumeRemoveAclOp);
@@ -77,7 +83,8 @@
     Preconditions.checkNotNull(removeAclRequest);
     ozoneAcls = Lists.newArrayList(
         OzoneAcl.fromProtobuf(removeAclRequest.getAcl()));
-    volumeName = removeAclRequest.getObj().getPath().substring(1);
+    obj = OzoneObjInfo.fromProtobuf(removeAclRequest.getObj());
+    volumeName = obj.getPath().substring(1);
   }
 
   @Override
@@ -95,6 +102,11 @@
   }
 
   @Override
+  OzoneObj getObject() {
+    return obj;
+  }
+
+  @Override
   OMResponse.Builder onInit() {
     return OmResponseUtil.getOMResponseBuilder(getOmRequest());
   }
@@ -114,7 +126,8 @@
   }
 
   @Override
-  void onComplete(Result result, IOException ex, long trxnLogIndex) {
+  void onComplete(Result result, IOException ex, long trxnLogIndex,
+      AuditLogger auditLogger, Map<String, String> auditMap) {
     switch (result) {
     case SUCCESS:
       if (LOG.isDebugEnabled()) {
@@ -130,6 +143,8 @@
       LOG.error("Unrecognized Result for OMVolumeRemoveAclRequest: {}",
           getOmRequest());
     }
+    auditLog(auditLogger, buildAuditMessage(OMAction.REMOVE_ACL, auditMap,
+        ex, getOmRequest().getUserInfo()));
   }
 
   public static String getRequestType() {
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeSetAclRequest.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeSetAclRequest.java
index 0c56af5..d6a054c 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeSetAclRequest.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeSetAclRequest.java
@@ -22,6 +22,8 @@
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.hdds.scm.storage.CheckedBiFunction;
 import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.om.OzoneManager;
 import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
@@ -32,6 +34,8 @@
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneObj.ObjectType;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
 import org.apache.hadoop.util.Time;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -39,6 +43,7 @@
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
+import java.util.Map;
 
 /**
  * Handles volume set acl request.
@@ -69,6 +74,7 @@
 
   private List<OzoneAcl> ozoneAcls;
   private String volumeName;
+  private OzoneObj obj;
 
   public OMVolumeSetAclRequest(OMRequest omRequest) {
     super(omRequest, volumeSetAclOp);
@@ -78,7 +84,8 @@
     ozoneAcls = new ArrayList<>();
     setAclRequest.getAclList().forEach(oai ->
         ozoneAcls.add(OzoneAcl.fromProtobuf(oai)));
-    volumeName = setAclRequest.getObj().getPath().substring(1);
+    obj = OzoneObjInfo.fromProtobuf(setAclRequest.getObj());
+    volumeName = obj.getPath().substring(1);
   }
 
   @Override
@@ -92,6 +99,11 @@
   }
 
   @Override
+  OzoneObj getObject() {
+    return obj;
+  }
+
+  @Override
   OMResponse.Builder onInit() {
     return OmResponseUtil.getOMResponseBuilder(getOmRequest());
   }
@@ -111,7 +123,8 @@
   }
 
   @Override
-  void onComplete(Result result, IOException ex, long trxnLogIndex) {
+  void onComplete(Result result, IOException ex, long trxnLogIndex,
+      AuditLogger auditLogger, Map<String, String> auditMap) {
     switch (result) {
     case SUCCESS:
       if (LOG.isDebugEnabled()) {
@@ -127,6 +140,9 @@
       LOG.error("Unrecognized Result for OMVolumeSetAclRequest: {}",
           getOmRequest());
     }
+
+    auditLog(auditLogger, buildAuditMessage(OMAction.SET_ACL, auditMap,
+        ex, getOmRequest().getUserInfo()));
   }
 
   public static String getRequestType() {
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketCreateResponse.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketCreateResponse.java
index cb1f322..ca74a6f 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketCreateResponse.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketCreateResponse.java
@@ -22,6 +22,7 @@
 
 import org.apache.hadoop.ozone.om.OMMetadataManager;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
 import org.apache.hadoop.ozone.om.response.OMClientResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
@@ -40,11 +41,20 @@
 public final class OMBucketCreateResponse extends OMClientResponse {
 
   private final OmBucketInfo omBucketInfo;
+  private final OmVolumeArgs omVolumeArgs;
+
+  public OMBucketCreateResponse(@Nonnull OMResponse omResponse,
+      @Nonnull OmBucketInfo omBucketInfo, @Nonnull OmVolumeArgs omVolumeArgs) {
+    super(omResponse);
+    this.omBucketInfo = omBucketInfo;
+    this.omVolumeArgs = omVolumeArgs;
+  }
 
   public OMBucketCreateResponse(@Nonnull OMResponse omResponse,
       @Nonnull OmBucketInfo omBucketInfo) {
     super(omResponse);
     this.omBucketInfo = omBucketInfo;
+    this.omVolumeArgs = null;
   }
 
   /**
@@ -55,6 +65,7 @@
     super(omResponse);
     checkStatusNotOK();
     omBucketInfo = null;
+    omVolumeArgs = null;
   }
 
   @Override
@@ -66,6 +77,12 @@
             omBucketInfo.getBucketName());
     omMetadataManager.getBucketTable().putWithBatch(batchOperation,
         dbBucketKey, omBucketInfo);
+    // update volume usedNamespace
+    if (omVolumeArgs != null) {
+      omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
+              omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
+              omVolumeArgs);
+    }
   }
 
   @Nullable
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketDeleteResponse.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketDeleteResponse.java
index c3c7fef..00247aa 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketDeleteResponse.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketDeleteResponse.java
@@ -21,6 +21,7 @@
 import java.io.IOException;
 
 import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
 import org.apache.hadoop.ozone.om.response.OMClientResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
@@ -39,12 +40,22 @@
 
   private String volumeName;
   private String bucketName;
+  private final OmVolumeArgs omVolumeArgs;
+
+  public OMBucketDeleteResponse(@Nonnull OMResponse omResponse,
+      String volumeName, String bucketName, OmVolumeArgs volumeArgs) {
+    super(omResponse);
+    this.volumeName = volumeName;
+    this.bucketName = bucketName;
+    this.omVolumeArgs = volumeArgs;
+  }
 
   public OMBucketDeleteResponse(@Nonnull OMResponse omResponse,
       String volumeName, String bucketName) {
     super(omResponse);
     this.volumeName = volumeName;
     this.bucketName = bucketName;
+    this.omVolumeArgs = null;
   }
 
   /**
@@ -54,6 +65,7 @@
   public OMBucketDeleteResponse(@Nonnull OMResponse omResponse) {
     super(omResponse);
     checkStatusNotOK();
+    this.omVolumeArgs = null;
   }
 
   @Override
@@ -64,6 +76,12 @@
         omMetadataManager.getBucketKey(volumeName, bucketName);
     omMetadataManager.getBucketTable().deleteWithBatch(batchOperation,
         dbBucketKey);
+    // update volume usedNamespace
+    if (omVolumeArgs != null) {
+      omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
+              omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
+              omVolumeArgs);
+    }
   }
 
   public String getVolumeName() {
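
Both bucket responses above gain an optional OmVolumeArgs so the volume's usedNamespace can be persisted in the same RocksDB batch, while the older constructors keep passing null and skip that write. A minimal sketch of the conditional write, with the table and batch collapsed into a plain map (names here are illustrative, not the real OMMetadataManager API):

    import java.util.HashMap;
    import java.util.Map;

    public class VolumeNamespaceUpdateSketch {

      // Plays the role of a batch of table writes.
      static class Batch {
        Map<String, String> writes = new HashMap<>();
      }

      static void addToDBBatch(Batch batch, String volumeKey, String volumeArgs) {
        // ... bucket table write elided ...
        if (volumeArgs != null) {                  // omVolumeArgs may be null
          batch.writes.put(volumeKey, volumeArgs); // volume-table putWithBatch
        }
      }

      public static void main(String[] args) {
        Batch batch = new Batch();
        addToDBBatch(batch, "/vol1", "OmVolumeArgs{usedNamespace=5}");
        addToDBBatch(batch, "/vol2", null);        // nothing to persist for /vol2
        System.out.println(batch.writes);          // only /vol1 is written
      }
    }
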
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/file/OMFileCreateResponse.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/file/OMFileCreateResponse.java
index de490c5..8b60dc2 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/file/OMFileCreateResponse.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/file/OMFileCreateResponse.java
@@ -22,7 +22,6 @@
 
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
 import org.apache.hadoop.ozone.om.response.key.OMKeyCreateResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
@@ -41,10 +40,10 @@
 
   public OMFileCreateResponse(@Nonnull OMResponse omResponse,
       @Nonnull OmKeyInfo omKeyInfo, @Nonnull List<OmKeyInfo> parentKeyInfos,
-      long openKeySessionID, @Nonnull OmVolumeArgs omVolumeArgs,
+      long openKeySessionID,
       @Nonnull OmBucketInfo omBucketInfo) {
     super(omResponse, omKeyInfo, parentKeyInfos, openKeySessionID,
-        omVolumeArgs, omBucketInfo);
+        omBucketInfo);
   }
 
   /**
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMAllocateBlockResponse.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMAllocateBlockResponse.java
index acc43ee..4b20853 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMAllocateBlockResponse.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMAllocateBlockResponse.java
@@ -21,7 +21,6 @@
 import org.apache.hadoop.ozone.om.OMMetadataManager;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
 import org.apache.hadoop.ozone.om.response.OMClientResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
@@ -41,16 +40,14 @@
 
   private OmKeyInfo omKeyInfo;
   private long clientID;
-  private OmVolumeArgs omVolumeArgs;
   private OmBucketInfo omBucketInfo;
 
   public OMAllocateBlockResponse(@Nonnull OMResponse omResponse,
       @Nonnull OmKeyInfo omKeyInfo, long clientID,
-      @Nonnull OmVolumeArgs omVolumeArgs, @Nonnull OmBucketInfo omBucketInfo) {
+      @Nonnull OmBucketInfo omBucketInfo) {
     super(omResponse);
     this.omKeyInfo = omKeyInfo;
     this.clientID = clientID;
-    this.omVolumeArgs = omVolumeArgs;
     this.omBucketInfo = omBucketInfo;
   }
 
@@ -74,7 +71,7 @@
 
     // update bucket usedBytes.
     omMetadataManager.getBucketTable().putWithBatch(batchOperation,
-        omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
-            omBucketInfo.getBucketName()), omBucketInfo);
+        omMetadataManager.getBucketKey(omKeyInfo.getVolumeName(),
+            omKeyInfo.getBucketName()), omBucketInfo);
   }
 }
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCommitResponse.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCommitResponse.java
index 8e2f6dc..5d43b27 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCommitResponse.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCommitResponse.java
@@ -21,7 +21,6 @@
 import org.apache.hadoop.ozone.om.OMMetadataManager;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
 import org.apache.hadoop.ozone.om.response.OMClientResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
@@ -42,17 +41,15 @@
   private OmKeyInfo omKeyInfo;
   private String ozoneKeyName;
   private String openKeyName;
-  private OmVolumeArgs omVolumeArgs;
   private OmBucketInfo omBucketInfo;
 
   public OMKeyCommitResponse(@Nonnull OMResponse omResponse,
       @Nonnull OmKeyInfo omKeyInfo, String ozoneKeyName, String openKeyName,
-      @Nonnull OmVolumeArgs omVolumeArgs, @Nonnull OmBucketInfo omBucketInfo) {
+      @Nonnull OmBucketInfo omBucketInfo) {
     super(omResponse);
     this.omKeyInfo = omKeyInfo;
     this.ozoneKeyName = ozoneKeyName;
     this.openKeyName = openKeyName;
-    this.omVolumeArgs = omVolumeArgs;
     this.omBucketInfo = omBucketInfo;
   }
 
@@ -78,7 +75,7 @@
 
     // update bucket usedBytes.
     omMetadataManager.getBucketTable().putWithBatch(batchOperation,
-        omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+        omMetadataManager.getBucketKey(omBucketInfo.getVolumeName(),
             omBucketInfo.getBucketName()), omBucketInfo);
   }
 
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCreateResponse.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCreateResponse.java
index 60f6bfe..98b1927 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCreateResponse.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCreateResponse.java
@@ -25,7 +25,6 @@
 import org.apache.hadoop.ozone.om.OMMetadataManager;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
 import org.apache.hadoop.ozone.om.response.OMClientResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
@@ -48,18 +47,15 @@
   private OmKeyInfo omKeyInfo;
   private long openKeySessionID;
   private List<OmKeyInfo> parentKeyInfos;
-  private OmVolumeArgs omVolumeArgs;
   private OmBucketInfo omBucketInfo;
 
   public OMKeyCreateResponse(@Nonnull OMResponse omResponse,
       @Nonnull OmKeyInfo omKeyInfo, List<OmKeyInfo> parentKeyInfos,
-      long openKeySessionID, @Nonnull OmVolumeArgs omVolumeArgs,
-      @Nonnull OmBucketInfo omBucketInfo) {
+      long openKeySessionID, @Nonnull OmBucketInfo omBucketInfo) {
     super(omResponse);
     this.omKeyInfo = omKeyInfo;
     this.openKeySessionID = openKeySessionID;
     this.parentKeyInfos = parentKeyInfos;
-    this.omVolumeArgs = omVolumeArgs;
     this.omBucketInfo = omBucketInfo;
   }
 
@@ -102,8 +98,8 @@
 
     // update bucket usedBytes.
     omMetadataManager.getBucketTable().putWithBatch(batchOperation,
-        omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
-            omBucketInfo.getBucketName()), omBucketInfo);
+        omMetadataManager.getBucketKey(omKeyInfo.getVolumeName(),
+            omKeyInfo.getBucketName()), omBucketInfo);
   }
 }
 
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyDeleteResponse.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyDeleteResponse.java
index e856701..58785c0 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyDeleteResponse.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyDeleteResponse.java
@@ -22,7 +22,6 @@
 import org.apache.hadoop.ozone.om.OMMetadataManager;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
     .OMResponse;
@@ -41,15 +40,13 @@
 public class OMKeyDeleteResponse extends AbstractOMKeyDeleteResponse {
 
   private OmKeyInfo omKeyInfo;
-  private OmVolumeArgs omVolumeArgs;
   private OmBucketInfo omBucketInfo;
 
   public OMKeyDeleteResponse(@Nonnull OMResponse omResponse,
       @Nonnull OmKeyInfo omKeyInfo, boolean isRatisEnabled,
-      @Nonnull OmVolumeArgs omVolumeArgs, @Nonnull OmBucketInfo omBucketInfo) {
+      @Nonnull OmBucketInfo omBucketInfo) {
     super(omResponse, isRatisEnabled);
     this.omKeyInfo = omKeyInfo;
-    this.omVolumeArgs = omVolumeArgs;
     this.omBucketInfo = omBucketInfo;
   }
 
@@ -75,7 +72,7 @@
 
     // update bucket usedBytes.
     omMetadataManager.getBucketTable().putWithBatch(batchOperation,
-        omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+        omMetadataManager.getBucketKey(omBucketInfo.getVolumeName(),
             omBucketInfo.getBucketName()), omBucketInfo);
   }
 }
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeysDeleteResponse.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeysDeleteResponse.java
index 00a23fc..8a6a4a2 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeysDeleteResponse.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeysDeleteResponse.java
@@ -23,7 +23,6 @@
 import org.apache.hadoop.ozone.om.OMMetadataManager;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
 
@@ -41,16 +40,13 @@
 @CleanupTableInfo(cleanupTables = KEY_TABLE)
 public class OMKeysDeleteResponse extends AbstractOMKeyDeleteResponse {
   private List<OmKeyInfo> omKeyInfoList;
-  private OmVolumeArgs omVolumeArgs;
   private OmBucketInfo omBucketInfo;
 
   public OMKeysDeleteResponse(@Nonnull OMResponse omResponse,
       @Nonnull List<OmKeyInfo> keyDeleteList,
-      boolean isRatisEnabled, @Nonnull OmVolumeArgs omVolumeArgs,
-      @Nonnull OmBucketInfo omBucketInfo) {
+      boolean isRatisEnabled, @Nonnull OmBucketInfo omBucketInfo) {
     super(omResponse, isRatisEnabled);
     this.omKeyInfoList = keyDeleteList;
-    this.omVolumeArgs = omVolumeArgs;
     this.omBucketInfo = omBucketInfo;
   }
 
@@ -91,7 +87,7 @@
 
     // update bucket usedBytes.
     omMetadataManager.getBucketTable().putWithBatch(batchOperation,
-        omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+        omMetadataManager.getBucketKey(omBucketInfo.getVolumeName(),
             omBucketInfo.getBucketName()), omBucketInfo);
   }
 }
\ No newline at end of file
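
Across the key and multipart responses above, OmVolumeArgs is removed and the bucket-table key is rebuilt from the volume/bucket names that the key or bucket info already carries. A small sketch of that derivation with simplified stand-in types; the "/volume/bucket" key layout is assumed for illustration:

    import java.util.HashMap;
    import java.util.Map;

    public class BucketKeySketch {

      // Simplified stand-in for OmKeyInfo: just the two names this change needs.
      static class KeyInfo {
        final String volumeName;
        final String bucketName;
        KeyInfo(String volumeName, String bucketName) {
          this.volumeName = volumeName;
          this.bucketName = bucketName;
        }
      }

      static String bucketKey(String volume, String bucket) {
        return "/" + volume + "/" + bucket;    // assumed getBucketKey() layout
      }

      public static void main(String[] args) {
        // Before: getBucketKey(omVolumeArgs.getVolume(), ...)
        // After:  getBucketKey(omKeyInfo.getVolumeName(), omKeyInfo.getBucketName())
        KeyInfo keyInfo = new KeyInfo("vol1", "bucket1");
        Map<String, String> bucketTable = new HashMap<>();
        bucketTable.put(bucketKey(keyInfo.volumeName, keyInfo.bucketName),
            "OmBucketInfo with updated usedBytes");
        System.out.println(bucketTable.keySet());
      }
    }
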
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadAbortResponse.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadAbortResponse.java
index b11a732..d641875 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadAbortResponse.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadAbortResponse.java
@@ -23,7 +23,6 @@
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
 import org.apache.hadoop.ozone.om.helpers.OmMultipartKeyInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
 import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
 import org.apache.hadoop.ozone.om.response.OMClientResponse;
@@ -52,18 +51,15 @@
   private String multipartKey;
   private OmMultipartKeyInfo omMultipartKeyInfo;
   private boolean isRatisEnabled;
-  private OmVolumeArgs omVolumeArgs;
   private OmBucketInfo omBucketInfo;
 
   public S3MultipartUploadAbortResponse(@Nonnull OMResponse omResponse,
       String multipartKey, @Nonnull OmMultipartKeyInfo omMultipartKeyInfo,
-      boolean isRatisEnabled, @Nonnull OmVolumeArgs omVolumeArgs,
-      @Nonnull OmBucketInfo omBucketInfo) {
+      boolean isRatisEnabled, @Nonnull OmBucketInfo omBucketInfo) {
     super(omResponse);
     this.multipartKey = multipartKey;
     this.omMultipartKeyInfo = omMultipartKeyInfo;
     this.isRatisEnabled = isRatisEnabled;
-    this.omVolumeArgs = omVolumeArgs;
     this.omBucketInfo = omBucketInfo;
   }
 
@@ -106,7 +102,7 @@
 
       // update bucket usedBytes.
       omMetadataManager.getBucketTable().putWithBatch(batchOperation,
-          omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+          omMetadataManager.getBucketKey(omBucketInfo.getVolumeName(),
               omBucketInfo.getBucketName()), omBucketInfo);
     }
   }
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadCommitPartResponse.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadCommitPartResponse.java
index 496175f..c2b119b 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadCommitPartResponse.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/multipart/S3MultipartUploadCommitPartResponse.java
@@ -23,7 +23,6 @@
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
 import org.apache.hadoop.ozone.om.helpers.OmMultipartKeyInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
 import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
 import org.apache.hadoop.ozone.om.response.OMClientResponse;
@@ -58,7 +57,6 @@
   private OzoneManagerProtocolProtos.PartKeyInfo oldPartKeyInfo;
   private OmKeyInfo openPartKeyInfoToBeDeleted;
   private boolean isRatisEnabled;
-  private OmVolumeArgs omVolumeArgs;
   private OmBucketInfo omBucketInfo;
 
   /**
@@ -78,8 +76,7 @@
       @Nullable OmMultipartKeyInfo omMultipartKeyInfo,
       @Nullable OzoneManagerProtocolProtos.PartKeyInfo oldPartKeyInfo,
       @Nullable OmKeyInfo openPartKeyInfoToBeDeleted,
-      boolean isRatisEnabled, @Nonnull OmVolumeArgs omVolumeArgs,
-      @Nonnull OmBucketInfo omBucketInfo) {
+      boolean isRatisEnabled, @Nonnull OmBucketInfo omBucketInfo) {
     super(omResponse);
     this.multipartKey = multipartKey;
     this.openKey = openKey;
@@ -87,7 +84,6 @@
     this.oldPartKeyInfo = oldPartKeyInfo;
     this.openPartKeyInfoToBeDeleted = openPartKeyInfoToBeDeleted;
     this.isRatisEnabled = isRatisEnabled;
-    this.omVolumeArgs = omVolumeArgs;
     this.omBucketInfo = omBucketInfo;
   }
 
@@ -154,7 +150,7 @@
 
     // update bucket usedBytes.
     omMetadataManager.getBucketTable().putWithBatch(batchOperation,
-        omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+        omMetadataManager.getBucketKey(omBucketInfo.getVolumeName(),
             omBucketInfo.getBucketName()), omBucketInfo);
   }
 }
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
index 9d77e50..807c150 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
@@ -17,10 +17,10 @@
 package org.apache.hadoop.ozone.protocolPB;
 
 import static org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils.getRequest;
-import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Type.PrepareStatus;
+//import static org.apache.hadoop.ozone.protocol.proto
+// .OzoneManagerProtocolProtos.Type.PrepareStatus;
 
 import java.io.IOException;
-import java.util.Optional;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.atomic.AtomicLong;
 
@@ -29,10 +29,12 @@
 import org.apache.hadoop.hdds.utils.ProtocolMessageMetrics;
 import org.apache.hadoop.ozone.OmUtils;
 import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMLeaderNotReadyException;
 import org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException;
 import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolPB;
 import org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer;
 import org.apache.hadoop.ozone.om.ratis.OzoneManagerRatisServer;
+import org.apache.hadoop.ozone.om.ratis.OzoneManagerRatisServer.RaftServerStatus;
 import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerRatisUtils;
 import org.apache.hadoop.ozone.om.request.OMClientRequest;
 import org.apache.hadoop.ozone.om.response.OMClientResponse;
@@ -47,6 +49,9 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static org.apache.hadoop.ozone.om.ratis.OzoneManagerRatisServer.RaftServerStatus.LEADER_AND_READY;
+import static org.apache.hadoop.ozone.om.ratis.OzoneManagerRatisServer.RaftServerStatus.NOT_LEADER;
+
 /**
  * This class is the server-side translator that forwards requests received on
  * {@link OzoneManagerProtocolPB}
@@ -123,13 +128,14 @@
 
   private OMResponse processRequest(OMRequest request) throws
       ServiceException {
-
+    RaftServerStatus raftServerStatus;
     if (isRatisEnabled) {
       // Check if the request is a read only request
       if (OmUtils.isReadOnly(request)) {
         return submitReadRequestToOM(request);
       } else {
-        if (omRatisServer.isLeader()) {
+        raftServerStatus = omRatisServer.checkLeaderStatus();
+        if (raftServerStatus == LEADER_AND_READY) {
           try {
             OMClientRequest omClientRequest = getRequest(ozoneManager, request);
             request = omClientRequest.preExecute(ozoneManager);
@@ -139,12 +145,7 @@
           }
           return submitRequestToRatis(request);
         } else {
-          // throw not leader exception. This is being done, so to avoid
-          // unnecessary execution of preExecute on follower OM's. This
-          // will be helpful in the case like where we we reduce the
-          // chance of allocate blocks on follower OM's. Right now our
-          // leader status is updated every 1 second.
-          throw createNotLeaderException();
+          throw createLeaderErrorException(raftServerStatus);
         }
       }
     } else {
@@ -186,26 +187,22 @@
   private OMResponse submitReadRequestToOM(OMRequest request)
       throws ServiceException {
     // Check if this OM is the leader.
-    if (omRatisServer.isLeader() ||
-        request.getCmdType().equals(PrepareStatus)) {
+    RaftServerStatus raftServerStatus = omRatisServer.checkLeaderStatus();
+    if (raftServerStatus == LEADER_AND_READY) {
       return handler.handleReadRequest(request);
     } else {
-      throw createNotLeaderException();
+      throw createLeaderErrorException(raftServerStatus);
     }
   }
 
   private ServiceException createNotLeaderException() {
     RaftPeerId raftPeerId = omRatisServer.getRaftPeerId();
-    Optional<RaftPeerId> leaderRaftPeerId = omRatisServer
-        .getCachedLeaderPeerId();
 
-    OMNotLeaderException notLeaderException;
-    if (leaderRaftPeerId.isPresent()) {
-      notLeaderException = new OMNotLeaderException(
-          raftPeerId, leaderRaftPeerId.get());
-    } else {
-      notLeaderException = new OMNotLeaderException(raftPeerId);
-    }
+    // TODO: Set suggest leaderID. Right now, client is not using suggest
+    // leaderID. Need to fix this.
+
+    OMNotLeaderException notLeaderException =
+        new OMNotLeaderException(raftPeerId);
 
     if (LOG.isDebugEnabled()) {
       LOG.debug(notLeaderException.getMessage());
@@ -214,6 +211,26 @@
     return new ServiceException(notLeaderException);
   }
 
+  private ServiceException createLeaderErrorException(
+      RaftServerStatus raftServerStatus) {
+    if (raftServerStatus == NOT_LEADER) {
+      return createNotLeaderException();
+    } else {
+      return createLeaderNotReadyException();
+    }
+  }
+
+
+  private ServiceException createLeaderNotReadyException() {
+    RaftPeerId raftPeerId = omRatisServer.getRaftPeerId();
+
+    OMLeaderNotReadyException leaderNotReadyException =
+        new OMLeaderNotReadyException(raftPeerId.toString() + " is Leader " +
+            "but not ready to process request");
+
+    return new ServiceException(leaderNotReadyException);
+  }
+
   /**
    * Submits request directly to OM.
    */
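
The translator above replaces the boolean isLeader() check with a tri-state leader status: a ready leader handles the request, a non-leader gets OMNotLeaderException, and a leader that is not yet ready gets the new OMLeaderNotReadyException. A compact sketch of that dispatch, using plain exceptions and an enum that mirrors the states referenced in the hunk (the middle state's exact name is assumed):

    public class LeaderStatusSketch {

      enum RaftServerStatus { LEADER_AND_READY, LEADER_AND_NOT_READY, NOT_LEADER }

      static String process(RaftServerStatus status, String peerId)
          throws Exception {
        if (status == RaftServerStatus.LEADER_AND_READY) {
          return "handled on " + peerId;
        } else if (status == RaftServerStatus.NOT_LEADER) {
          // stand-in for createNotLeaderException()
          throw new Exception(peerId + " is not the leader");
        } else {
          // stand-in for createLeaderNotReadyException()
          throw new Exception(peerId + " is Leader but not ready to process request");
        }
      }

      public static void main(String[] args) throws Exception {
        System.out.println(process(RaftServerStatus.LEADER_AND_READY, "om1"));
        try {
          process(RaftServerStatus.LEADER_AND_NOT_READY, "om2");
        } catch (Exception e) {
          System.out.println("rejected: " + e.getMessage());
        }
      }
    }
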
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
index 10e23ed..5a8416b 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
@@ -423,7 +423,7 @@
         request.getPrefix(),
         request.getCount());
     for (OmKeyInfo key : keys) {
-      resp.addKeyInfo(key.getProtobuf());
+      resp.addKeyInfo(key.getProtobuf(true));
     }
 
     return resp.build();
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
index 575c9ea..0a0e947 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/AWSV4AuthValidator.java
@@ -26,7 +26,6 @@
 import javax.crypto.spec.SecretKeySpec;
 import java.io.UnsupportedEncodingException;
 import java.net.URLDecoder;
-import java.nio.charset.Charset;
 import java.nio.charset.StandardCharsets;
 import java.security.GeneralSecurityException;
 import java.security.MessageDigest;
@@ -42,14 +41,13 @@
   private final static Logger LOG =
       LoggerFactory.getLogger(AWSV4AuthValidator.class);
   private static final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
-  private static final Charset UTF_8 = Charset.forName("utf-8");
 
   private AWSV4AuthValidator() {
   }
 
   private static String urlDecode(String str) {
     try {
-      return URLDecoder.decode(str, UTF_8.name());
+      return URLDecoder.decode(str, StandardCharsets.UTF_8.name());
     } catch (UnsupportedEncodingException e) {
       throw new RuntimeException(e);
     }
@@ -57,7 +55,7 @@
 
   public static String hash(String payload) throws NoSuchAlgorithmException {
     MessageDigest md = MessageDigest.getInstance("SHA-256");
-    md.update(payload.getBytes(UTF_8));
+    md.update(payload.getBytes(StandardCharsets.UTF_8));
     return String.format("%064x", new java.math.BigInteger(1, md.digest()));
   }
 
@@ -91,7 +89,8 @@
     String dateStamp = signData[0];
     String regionName = signData[1];
     String serviceName = signData[2];
-    byte[] kDate = sign(("AWS4" + key).getBytes(UTF_8), dateStamp);
+    byte[] kDate = sign(("AWS4" + key)
+        .getBytes(StandardCharsets.UTF_8), dateStamp);
     byte[] kRegion = sign(kDate, regionName);
     byte[] kService = sign(kRegion, serviceName);
     byte[] kSigning = sign(kService, "aws4_request");
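
The hunk above only swaps the hard-coded charset for StandardCharsets.UTF_8; the derivation chain itself is standard AWS Signature V4. A self-contained sketch of that chain using plain JDK classes, with illustrative class and method names:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;

    final class SigV4Sketch {
      private static byte[] hmacSha256(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
      }

      // kSigning = HMAC(HMAC(HMAC(HMAC("AWS4"+secret, date), region), service), "aws4_request")
      static byte[] signingKey(String secret, String dateStamp, String region,
          String service) throws Exception {
        byte[] kDate = hmacSha256(
            ("AWS4" + secret).getBytes(StandardCharsets.UTF_8), dateStamp);
        byte[] kRegion = hmacSha256(kDate, region);
        byte[] kService = hmacSha256(kRegion, service);
        return hmacSha256(kService, "aws4_request");
      }
    }
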
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
index 96ba50d..fe750ba 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
@@ -31,6 +31,7 @@
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
+import java.time.Instant;
 import java.util.EnumSet;
 
 /**
@@ -86,11 +87,10 @@
       String blockId, EnumSet<AccessModeProto> modes, long maxLength) {
     OzoneBlockTokenIdentifier tokenIdentifier = createIdentifier(user,
         blockId, modes, maxLength);
-    if (LOG.isTraceEnabled()) {
+    if (LOG.isDebugEnabled()) {
       long expiryTime = tokenIdentifier.getExpiryDate();
-      String tokenId = tokenIdentifier.toString();
-      LOG.trace("Issued delegation token -> expiryTime:{}, tokenId:{}",
-          expiryTime, tokenId);
+      LOG.info("Issued delegation token -> expiryTime:{}, tokenId:{}",
+          Instant.ofEpochMilli(expiryTime), tokenIdentifier);
     }
     // Pass blockId as service.
     return new Token<>(tokenIdentifier.getBytes(),
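
On the logging change above: Instant.ofEpochMilli renders the raw expiry-millis value as an ISO-8601 timestamp, which is far easier to correlate in logs. A tiny illustration with a made-up epoch value:

    long expiryTime = 1700000000000L;  // illustrative epoch-millis value
    System.out.println(expiryTime);    // prints: 1700000000000
    System.out.println(java.time.Instant.ofEpochMilli(expiryTime));  // prints: 2023-11-14T22:13:20Z
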
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestChunkStreams.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestChunkStreams.java
index 78e1c44..cbde28b 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestChunkStreams.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestChunkStreams.java
@@ -58,7 +58,7 @@
               }
 
               @Override
-              public synchronized long getPos() throws IOException {
+              public synchronized long getPos() {
                 return pos;
               }
 
@@ -114,7 +114,7 @@
               }
 
               @Override
-              public synchronized long getPos() throws IOException {
+              public synchronized long getPos() {
                 return pos;
               }
 
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
index 6046ac9..a947b35 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
@@ -76,8 +76,12 @@
 import org.junit.Test;
 import org.mockito.Mockito;
 
+import static java.util.Collections.emptyList;
 import static java.util.Collections.singletonList;
+import static java.util.Comparator.comparing;
+import static java.util.stream.Collectors.toList;
 import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
 
 /**
  * Unit test key manager.
@@ -91,6 +95,7 @@
 
   private Instant startDate;
   private File testDir;
+  private ScmBlockLocationProtocol blockClient;
 
   @Before
   public void setup() throws IOException {
@@ -100,14 +105,10 @@
         testDir.toString());
     metadataManager = new OmMetadataManagerImpl(configuration);
     containerClient = Mockito.mock(StorageContainerLocationProtocol.class);
+    blockClient = Mockito.mock(ScmBlockLocationProtocol.class);
     keyManager = new KeyManagerImpl(
-        Mockito.mock(ScmBlockLocationProtocol.class),
-        containerClient,
-        metadataManager,
-        configuration,
-        "omtest",
-        Mockito.mock(OzoneBlockTokenSecretManager.class)
-    );
+        blockClient, containerClient, metadataManager, configuration,
+        "omtest", Mockito.mock(OzoneBlockTokenSecretManager.class));
 
     startDate = Instant.now();
   }
@@ -372,10 +373,10 @@
 
     List<ContainerWithPipeline> cps = new ArrayList<>();
     ContainerInfo ci = Mockito.mock(ContainerInfo.class);
-    Mockito.when(ci.getContainerID()).thenReturn(1L);
+    when(ci.getContainerID()).thenReturn(1L);
     cps.add(new ContainerWithPipeline(ci, pipelineTwo));
 
-    Mockito.when(containerClient.getContainerWithPipelineBatch(containerIDs))
+    when(containerClient.getContainerWithPipelineBatch(containerIDs))
         .thenReturn(cps);
 
     final OmVolumeArgs volumeArgs = OmVolumeArgs.newBuilder()
@@ -443,6 +444,7 @@
     String volume = "vol";
     String bucket = "bucket";
     String keyPrefix = "key";
+    String client = "client.host";
 
     TestOMRequestUtils.addVolumeToDB(volume, OzoneConsts.OZONE,
         metadataManager);
@@ -450,17 +452,12 @@
     TestOMRequestUtils.addBucketToDB(volume, bucket, metadataManager);
 
     final Pipeline pipeline = MockPipeline.createPipeline(3);
-
-    OmKeyInfo.Builder keyInfoBuilder = new OmKeyInfo.Builder()
-        .setVolumeName(volume)
-        .setBucketName(bucket)
-        .setCreationTime(Time.now())
-        .setOmKeyLocationInfos(singletonList(
-            new OmKeyLocationInfoGroup(0, new ArrayList<>())))
-        .setReplicationFactor(ReplicationFactor.THREE)
-        .setReplicationType(ReplicationType.RATIS);
+    final List<String> nodes = pipeline.getNodes().stream()
+        .map(DatanodeDetails::getUuidString)
+        .collect(toList());
 
     List<Long> containerIDs = new ArrayList<>();
+    List<ContainerWithPipeline> containersWithPipeline = new ArrayList<>();
     for (long i = 1; i <= 10; i++) {
       final OmKeyLocationInfo keyLocationInfo = new OmKeyLocationInfo.Builder()
           .setBlockID(new BlockID(i, 1L))
@@ -469,9 +466,21 @@
           .setLength(256000)
           .build();
 
+      ContainerInfo containerInfo = new ContainerInfo.Builder()
+          .setContainerID(i)
+          .build();
+      containersWithPipeline.add(
+          new ContainerWithPipeline(containerInfo, pipeline));
       containerIDs.add(i);
 
-      OmKeyInfo keyInfo = keyInfoBuilder
+      OmKeyInfo keyInfo = new OmKeyInfo.Builder()
+          .setVolumeName(volume)
+          .setBucketName(bucket)
+          .setCreationTime(Time.now())
+          .setOmKeyLocationInfos(singletonList(
+              new OmKeyLocationInfoGroup(0, new ArrayList<>())))
+          .setReplicationFactor(ReplicationFactor.THREE)
+          .setReplicationType(ReplicationType.RATIS)
           .setKeyName(keyPrefix + i)
           .setObjectID(i)
           .setUpdateID(i)
@@ -480,15 +489,83 @@
       TestOMRequestUtils.addKeyToOM(metadataManager, keyInfo);
     }
 
+    when(containerClient.getContainerWithPipelineBatch(containerIDs))
+        .thenReturn(containersWithPipeline);
+
     OmKeyArgs.Builder builder = new OmKeyArgs.Builder()
         .setVolumeName(volume)
         .setBucketName(bucket)
-        .setKeyName("");
+        .setKeyName("")
+        .setSortDatanodesInPipeline(true);
     List<OzoneFileStatus> fileStatusList =
-        keyManager.listStatus(builder.build(), false, null, Long.MAX_VALUE);
+        keyManager.listStatus(builder.build(), false,
+            null, Long.MAX_VALUE, client);
 
     Assert.assertEquals(10, fileStatusList.size());
     verify(containerClient).getContainerWithPipelineBatch(containerIDs);
+    verify(blockClient).sortDatanodes(nodes, client);
+  }
+
+  @Test
+  public void sortDatanodes() throws Exception {
+    // GIVEN
+    String client = "anyhost";
+    int pipelineCount = 3;
+    int keysPerPipeline = 5;
+    OmKeyInfo[] keyInfos = new OmKeyInfo[pipelineCount * keysPerPipeline];
+    List<List<String>> expectedSortDatanodesInvocations = new ArrayList<>();
+    Map<Pipeline, List<DatanodeDetails>> expectedSortedNodes = new HashMap<>();
+    int ki = 0;
+    for (int p = 0; p < pipelineCount; p++) {
+      final Pipeline pipeline = MockPipeline.createPipeline(3);
+      final List<String> nodes = pipeline.getNodes().stream()
+          .map(DatanodeDetails::getUuidString)
+          .collect(toList());
+      expectedSortDatanodesInvocations.add(nodes);
+      final List<DatanodeDetails> sortedNodes = pipeline.getNodes().stream()
+          .sorted(comparing(DatanodeDetails::getUuidString))
+          .collect(toList());
+      expectedSortedNodes.put(pipeline, sortedNodes);
+
+      when(blockClient.sortDatanodes(nodes, client))
+          .thenReturn(sortedNodes);
+
+      for (int i = 1; i <= keysPerPipeline; i++) {
+        OmKeyLocationInfo keyLocationInfo = new OmKeyLocationInfo.Builder()
+            .setBlockID(new BlockID(i, 1L))
+            .setPipeline(pipeline)
+            .setOffset(0)
+            .setLength(256000)
+            .build();
+
+        OmKeyInfo keyInfo = new OmKeyInfo.Builder()
+            .setOmKeyLocationInfos(Arrays.asList(
+                new OmKeyLocationInfoGroup(0, emptyList()),
+                new OmKeyLocationInfoGroup(1, singletonList(keyLocationInfo))))
+            .build();
+        keyInfos[ki++] = keyInfo;
+      }
+    }
+
+    // WHEN
+    keyManager.sortDatanodes(client, keyInfos);
+
+    // THEN
+    // verify all key info locations got updated
+    for (OmKeyInfo keyInfo : keyInfos) {
+      OmKeyLocationInfoGroup locations = keyInfo.getLatestVersionLocations();
+      Assert.assertNotNull(locations);
+      for (OmKeyLocationInfo locationInfo : locations.getLocationList()) {
+        Pipeline pipeline = locationInfo.getPipeline();
+        List<DatanodeDetails> expectedOrder = expectedSortedNodes.get(pipeline);
+        Assert.assertEquals(expectedOrder, pipeline.getNodesInOrder());
+      }
+    }
+
+    // expect one invocation per pipeline
+    for (List<String> nodes : expectedSortDatanodesInvocations) {
+      verify(blockClient).sortDatanodes(nodes, client);
+    }
   }
 
 }
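
The sortDatanodes test pins down two properties: every location's pipeline ends up with the SCM-sorted node order, and sortDatanodes on the block client is invoked exactly once per distinct pipeline. A rough sketch of logic satisfying both, assuming Pipeline exposes a setter matching the getNodesInOrder() accessor asserted above; sortDatanodesViaScm is a hypothetical stand-in for the ScmBlockLocationProtocol call:

    static void sortLocations(String clientMachine, OmKeyInfo... keyInfos)
        throws IOException {
      Map<List<String>, List<DatanodeDetails>> cache = new HashMap<>();
      for (OmKeyInfo keyInfo : keyInfos) {
        OmKeyLocationInfoGroup group = keyInfo.getLatestVersionLocations();
        if (group == null) {
          continue;
        }
        for (OmKeyLocationInfo loc : group.getLocationList()) {
          Pipeline pipeline = loc.getPipeline();
          List<String> uuids = pipeline.getNodes().stream()
              .map(DatanodeDetails::getUuidString)
              .collect(toList());
          List<DatanodeDetails> sorted = cache.get(uuids);
          if (sorted == null) {
            sorted = sortDatanodesViaScm(uuids, clientMachine);  // one SCM call per pipeline
            cache.put(uuids, sorted);
          }
          pipeline.setNodesInOrder(sorted);
        }
      }
    }
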
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOMDBDefinition.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOMDBDefinition.java
new file mode 100644
index 0000000..73e9ea5
--- /dev/null
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOMDBDefinition.java
@@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.utils.db.DBColumnFamilyDefinition;
+import org.apache.hadoop.hdds.utils.db.DBStore;
+import org.apache.hadoop.ozone.om.codec.OMDBDefinition;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.File;
+import java.util.ArrayList;
+import java.util.Collection;
+
+/**
+ * Test that all the tables are covered both by OMDBDefinition
+ * as well as OmMetadataManagerImpl.
+ */
+public class TestOMDBDefinition {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  @Test
+  public void testDBDefinition() throws Exception {
+    OzoneConfiguration configuration = new OzoneConfiguration();
+    File metaDir = folder.getRoot();
+    DBStore store = OmMetadataManagerImpl.loadDB(configuration, metaDir);
+    OMDBDefinition dbDef = new OMDBDefinition();
+
+    // Get list of tables from DB Definitions
+    DBColumnFamilyDefinition[] columnFamilyDefinitions =
+        dbDef.getColumnFamilies();
+    int countOmDefTables = columnFamilyDefinitions.length;
+    ArrayList<String> missingDBDefTables = new ArrayList<>();
+
+    // Get list of tables from the RocksDB Store
+    Collection<String> missingOmDBTables =
+        store.getTableNames().values();
+    missingOmDBTables.remove("default");
+    int countOmDBTables = missingOmDBTables.size();
+    // Remove the table name if it is present in both data structures
+    for(DBColumnFamilyDefinition definition : columnFamilyDefinitions) {
+      if (!missingOmDBTables.remove(definition.getName())) {
+        missingDBDefTables.add(definition.getName());
+      }
+    }
+
+    Assert.assertEquals("Tables in OmMetadataManagerImpl are:"
+            + missingDBDefTables, 0, missingDBDefTables.size());
+    Assert.assertEquals("Tables missing in OMDBDefinition are:"
+        + missingOmDBTables, 0, missingOmDBTables.size());
+    Assert.assertEquals(countOmDBTables, countOmDefTables);
+  }
+}
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java
index 2364005..8e36f7d 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java
@@ -325,19 +325,21 @@
     String volumeNameB = "volumeB";
     String ozoneBucket = "ozoneBucket";
     String hadoopBucket = "hadoopBucket";
-
+    String ozoneTestBucket = "ozoneBucket-Test";
 
     // Create volumes and buckets.
     TestOMRequestUtils.addVolumeToDB(volumeNameA, omMetadataManager);
     TestOMRequestUtils.addVolumeToDB(volumeNameB, omMetadataManager);
     addBucketsToCache(volumeNameA, ozoneBucket);
     addBucketsToCache(volumeNameB, hadoopBucket);
-
+    addBucketsToCache(volumeNameA, ozoneTestBucket);
 
     String prefixKeyA = "key-a";
     String prefixKeyB = "key-b";
+    String prefixKeyC = "key-c";
     TreeSet<String> keysASet = new TreeSet<>();
     TreeSet<String> keysBSet = new TreeSet<>();
+    TreeSet<String> keysCSet = new TreeSet<>();
     for (int i=1; i<= 100; i++) {
       if (i % 2 == 0) {
         keysASet.add(
@@ -349,7 +351,8 @@
         addKeysToOM(volumeNameA, hadoopBucket, prefixKeyB + i, i);
       }
     }
-
+    keysCSet.add(prefixKeyC + 1);
+    addKeysToOM(volumeNameA, ozoneTestBucket, prefixKeyC + 0, 0);
 
     TreeSet<String> keysAVolumeBSet = new TreeSet<>();
     TreeSet<String> keysBVolumeBSet = new TreeSet<>();
@@ -443,6 +446,14 @@
 
     Assert.assertEquals(omKeyInfoList.size(), 0);
 
+    // List all keys with empty prefix
+    omKeyInfoList = omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+        null, null, 100);
+    Assert.assertEquals(50, omKeyInfoList.size());
+    for (OmKeyInfo omKeyInfo : omKeyInfoList) {
+      Assert.assertTrue(omKeyInfo.getKeyName().startsWith(
+          prefixKeyA));
+    }
   }
 
   @Test
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
index ce1b2b6..ff1f9c3 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
@@ -32,6 +32,7 @@
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
 import org.apache.hadoop.ozone.OmUtils;
 import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.om.OMMetadataManager;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
@@ -297,7 +298,7 @@
         OmVolumeArgs.newBuilder().setCreationTime(Time.now())
             .setVolume(volumeName).setAdminName(volumeName)
             .setOwnerName(volumeName).setQuotaInBytes(quotaInBytes)
-            .setQuotaInCounts(10000L).build();
+            .setQuotaInNamespace(10000L).build();
     omMetadataManager.getVolumeTable().put(
         omMetadataManager.getVolumeKey(volumeName), omVolumeArgs);
 
@@ -320,7 +321,7 @@
         OmVolumeArgs.newBuilder().setCreationTime(Time.now())
             .setVolume(volumeName).setAdminName(ownerName)
             .setOwnerName(ownerName).setQuotaInBytes(Long.MAX_VALUE)
-            .setQuotaInCounts(10000L).build();
+            .setQuotaInNamespace(10000L).build();
     omMetadataManager.getVolumeTable().put(
         omMetadataManager.getVolumeKey(volumeName), omVolumeArgs);
 
@@ -453,15 +454,15 @@
    * Create OMRequest for set volume property request with quota set.
    * @param volumeName
    * @param quotaInBytes
-   * @param quotaInCounts
+   * @param quotaInNamespace
    * @return OMRequest
    */
   public static OMRequest createSetVolumePropertyRequest(String volumeName,
-      long quotaInBytes, long quotaInCounts) {
+      long quotaInBytes, long quotaInNamespace) {
     SetVolumePropertyRequest setVolumePropertyRequest =
         SetVolumePropertyRequest.newBuilder().setVolumeName(volumeName)
             .setQuotaInBytes(quotaInBytes)
-            .setQuotaInCounts(quotaInCounts)
+            .setQuotaInNamespace(quotaInNamespace)
             .setModificationTime(Time.now()).build();
 
     return OMRequest.newBuilder().setClientId(UUID.randomUUID().toString())
@@ -702,7 +703,8 @@
       String adminName, String ownerName) {
     OzoneManagerProtocolProtos.VolumeInfo volumeInfo =
         OzoneManagerProtocolProtos.VolumeInfo.newBuilder().setVolume(volumeName)
-        .setAdminName(adminName).setOwnerName(ownerName).build();
+        .setAdminName(adminName).setOwnerName(ownerName)
+        .setQuotaInNamespace(OzoneConsts.QUOTA_RESET).build();
     OzoneManagerProtocolProtos.CreateVolumeRequest createVolumeRequest =
         OzoneManagerProtocolProtos.CreateVolumeRequest.newBuilder()
             .setVolumeInfo(volumeInfo).build();
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/bucket/TestOMBucketSetPropertyRequest.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/bucket/TestOMBucketSetPropertyRequest.java
index 6011a97..09c6ebc 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/bucket/TestOMBucketSetPropertyRequest.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/bucket/TestOMBucketSetPropertyRequest.java
@@ -119,7 +119,7 @@
             BucketArgs.newBuilder().setBucketName(bucketName)
                 .setVolumeName(volumeName)
                 .setQuotaInBytes(quotaInBytes)
-                .setQuotaInCounts(1000L)
+                .setQuotaInNamespace(1000L)
                 .setIsVersionEnabled(isVersionEnabled).build()))
         .setCmdType(OzoneManagerProtocolProtos.Type.SetBucketProperty)
         .setClientId(UUID.randomUUID().toString()).build();
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/file/TestOMDirectoryCreateRequest.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/file/TestOMDirectoryCreateRequest.java
index 7d8b5fc..887fbee 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/file/TestOMDirectoryCreateRequest.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/file/TestOMDirectoryCreateRequest.java
@@ -390,7 +390,7 @@
     Assert.assertNotNull(omMetadataManager.getKeyTable().get(
         omMetadataManager.getOzoneDirKey(volumeName, bucketName, keyName)));
 
-    Assert.assertEquals(1L, omMetrics.getNumKeys());
+    Assert.assertEquals(4L, omMetrics.getNumKeys());
   }
 
 
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/TestOMVolumeSetQuotaRequest.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/TestOMVolumeSetQuotaRequest.java
index 340c2f5..27116f2 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/TestOMVolumeSetQuotaRequest.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/volume/TestOMVolumeSetQuotaRequest.java
@@ -41,10 +41,10 @@
   public void testPreExecute() throws Exception {
     String volumeName = UUID.randomUUID().toString();
     long quotaInBytes = 100L;
-    long quotaInCounts = 1000L;
+    long quotaInNamespace = 1000L;
     OMRequest originalRequest =
         TestOMRequestUtils.createSetVolumePropertyRequest(volumeName,
-            quotaInBytes, quotaInCounts);
+            quotaInBytes, quotaInNamespace);
 
     OMVolumeSetQuotaRequest omVolumeSetQuotaRequest =
         new OMVolumeSetQuotaRequest(originalRequest);
@@ -59,14 +59,14 @@
     String volumeName = UUID.randomUUID().toString();
     String ownerName = "user1";
     long quotaInBytes = 100L;
-    long quotaInCounts = 1000L;
+    long quotaInNamespace = 1000L;
 
     TestOMRequestUtils.addUserToDB(volumeName, ownerName, omMetadataManager);
     TestOMRequestUtils.addVolumeToDB(volumeName, ownerName, omMetadataManager);
 
     OMRequest originalRequest =
         TestOMRequestUtils.createSetVolumePropertyRequest(volumeName,
-            quotaInBytes, quotaInCounts);
+            quotaInBytes, quotaInNamespace);
 
     OMVolumeSetQuotaRequest omVolumeSetQuotaRequest =
         new OMVolumeSetQuotaRequest(originalRequest);
@@ -81,7 +81,7 @@
     // As request is valid volume table should not have entry.
     Assert.assertNotNull(omVolumeArgs);
     long quotaBytesBeforeSet = omVolumeArgs.getQuotaInBytes();
-    long quotaCountBeforeSet = omVolumeArgs.getQuotaInCounts();
+    long quotaNamespaceBeforeSet = omVolumeArgs.getQuotaInNamespace();
 
     OMClientResponse omClientResponse =
         omVolumeSetQuotaRequest.validateAndUpdateCache(ozoneManager, 1,
@@ -96,11 +96,11 @@
 
     OmVolumeArgs ova = omMetadataManager.getVolumeTable().get(volumeKey);
     long quotaBytesAfterSet = ova.getQuotaInBytes();
-    long quotaCountAfterSet = ova.getQuotaInCounts();
+    long quotaNamespaceAfterSet = ova.getQuotaInNamespace();
     Assert.assertEquals(quotaInBytes, quotaBytesAfterSet);
-    Assert.assertEquals(quotaInCounts, quotaCountAfterSet);
+    Assert.assertEquals(quotaInNamespace, quotaNamespaceAfterSet);
     Assert.assertNotEquals(quotaBytesBeforeSet, quotaBytesAfterSet);
-    Assert.assertNotEquals(quotaCountBeforeSet, quotaCountAfterSet);
+    Assert.assertNotEquals(quotaNamespaceBeforeSet, quotaNamespaceAfterSet);
 
     // modificationTime should be greater than creationTime.
     long creationTime = omMetadataManager
@@ -115,11 +115,11 @@
       throws Exception {
     String volumeName = UUID.randomUUID().toString();
     long quotaInBytes = 100L;
-    long quotaInCounts= 100L;
+    long quotaInNamespace = 100L;
 
     OMRequest originalRequest =
         TestOMRequestUtils.createSetVolumePropertyRequest(volumeName,
-            quotaInBytes, quotaInCounts);
+            quotaInBytes, quotaInNamespace);
 
     OMVolumeSetQuotaRequest omVolumeSetQuotaRequest =
         new OMVolumeSetQuotaRequest(originalRequest);
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMAllocateBlockResponse.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMAllocateBlockResponse.java
index 494a308..602ec99 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMAllocateBlockResponse.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMAllocateBlockResponse.java
@@ -19,7 +19,6 @@
 package org.apache.hadoop.ozone.om.response.key;
 
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.util.Time;
 import org.junit.Assert;
 import org.junit.Test;
@@ -40,9 +39,6 @@
 
     OmKeyInfo omKeyInfo = TestOMRequestUtils.createOmKeyInfo(volumeName,
         bucketName, keyName, replicationType, replicationFactor);
-    OmVolumeArgs omVolumeArgs = OmVolumeArgs.newBuilder()
-        .setOwnerName(keyName).setAdminName(keyName)
-        .setVolume(volumeName).setCreationTime(Time.now()).build();
     OmBucketInfo omBucketInfo = OmBucketInfo.newBuilder()
         .setVolumeName(volumeName).setBucketName(bucketName)
         .setCreationTime(Time.now()).build();
@@ -55,7 +51,7 @@
         .build();
     OMAllocateBlockResponse omAllocateBlockResponse =
         new OMAllocateBlockResponse(omResponse, omKeyInfo, clientID,
-            omVolumeArgs, omBucketInfo);
+            omBucketInfo);
 
     String openKey = omMetadataManager.getOpenKey(volumeName, bucketName,
         keyName, clientID);
@@ -74,9 +70,6 @@
   public void testAddToDBBatchWithErrorResponse() throws Exception {
     OmKeyInfo omKeyInfo = TestOMRequestUtils.createOmKeyInfo(volumeName,
         bucketName, keyName, replicationType, replicationFactor);
-    OmVolumeArgs omVolumeArgs = OmVolumeArgs.newBuilder()
-        .setOwnerName(keyName).setAdminName(keyName)
-        .setVolume(volumeName).setCreationTime(Time.now()).build();
     OmBucketInfo omBucketInfo = OmBucketInfo.newBuilder()
         .setVolumeName(volumeName).setBucketName(bucketName)
         .setCreationTime(Time.now()).build();
@@ -89,7 +82,7 @@
         .build();
     OMAllocateBlockResponse omAllocateBlockResponse =
         new OMAllocateBlockResponse(omResponse, omKeyInfo, clientID,
-            omVolumeArgs, omBucketInfo);
+            omBucketInfo);
 
     // Before calling addToDBBatch
     String openKey = omMetadataManager.getOpenKey(volumeName, bucketName,
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyCommitResponse.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyCommitResponse.java
index ab425f2..5d2a3d8 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyCommitResponse.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyCommitResponse.java
@@ -38,9 +38,6 @@
 
     OmKeyInfo omKeyInfo = TestOMRequestUtils.createOmKeyInfo(volumeName,
         bucketName, keyName, replicationType, replicationFactor);
-    OmVolumeArgs omVolumeArgs = OmVolumeArgs.newBuilder()
-        .setOwnerName(keyName).setAdminName(keyName)
-        .setVolume(volumeName).setCreationTime(Time.now()).build();
     OmBucketInfo omBucketInfo = OmBucketInfo.newBuilder()
         .setVolumeName(volumeName).setBucketName(bucketName)
         .setCreationTime(Time.now()).build();
@@ -64,7 +61,7 @@
     String ozoneKey = omMetadataManager.getOzoneKey(volumeName, bucketName,
         keyName);
     OMKeyCommitResponse omKeyCommitResponse = new OMKeyCommitResponse(
-        omResponse, omKeyInfo, ozoneKey, openKey, omVolumeArgs, omBucketInfo);
+        omResponse, omKeyInfo, ozoneKey, openKey, omBucketInfo);
 
     omKeyCommitResponse.addToDBBatch(omMetadataManager, batchOperation);
 
@@ -102,7 +99,7 @@
         keyName);
 
     OMKeyCommitResponse omKeyCommitResponse = new OMKeyCommitResponse(
-        omResponse, omKeyInfo, ozoneKey, openKey, omVolumeArgs, omBucketInfo);
+        omResponse, omKeyInfo, ozoneKey, openKey, omBucketInfo);
 
     // As during commit Key, entry will be already there in openKeyTable.
     // Adding it here.
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyCreateResponse.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyCreateResponse.java
index 6357000..e3645ec 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyCreateResponse.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyCreateResponse.java
@@ -59,7 +59,7 @@
 
     OMKeyCreateResponse omKeyCreateResponse =
         new OMKeyCreateResponse(omResponse, omKeyInfo, null, clientID,
-            omVolumeArgs, omBucketInfo);
+            omBucketInfo);
 
     String openKey = omMetadataManager.getOpenKey(volumeName, bucketName,
         keyName, clientID);
@@ -77,9 +77,6 @@
     OmKeyInfo omKeyInfo = TestOMRequestUtils.createOmKeyInfo(volumeName,
         bucketName, keyName, replicationType, replicationFactor);
 
-    OmVolumeArgs omVolumeArgs = OmVolumeArgs.newBuilder()
-        .setOwnerName(keyName).setAdminName(keyName)
-        .setVolume(volumeName).setCreationTime(Time.now()).build();
     OmBucketInfo omBucketInfo = OmBucketInfo.newBuilder()
         .setVolumeName(volumeName).setBucketName(bucketName)
         .setCreationTime(Time.now()).build();
@@ -92,7 +89,7 @@
 
     OMKeyCreateResponse omKeyCreateResponse =
         new OMKeyCreateResponse(omResponse, omKeyInfo, null, clientID,
-            omVolumeArgs, omBucketInfo);
+            omBucketInfo);
 
     // Before calling addToDBBatch
     String openKey = omMetadataManager.getOpenKey(volumeName, bucketName,
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyDeleteResponse.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyDeleteResponse.java
index 440fa78..871e39f 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyDeleteResponse.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyDeleteResponse.java
@@ -60,7 +60,7 @@
             .build();
 
     OMKeyDeleteResponse omKeyDeleteResponse = new OMKeyDeleteResponse(
-        omResponse, omKeyInfo, true, omVolumeArgs, omBucketInfo);
+        omResponse, omKeyInfo, true, omBucketInfo);
 
     String ozoneKey = omMetadataManager.getOzoneKey(volumeName, bucketName,
         keyName);
@@ -128,7 +128,7 @@
             .build();
 
     OMKeyDeleteResponse omKeyDeleteResponse = new OMKeyDeleteResponse(
-        omResponse, omKeyInfo, true, omVolumeArgs, omBucketInfo);
+        omResponse, omKeyInfo, true, omBucketInfo);
 
     Assert.assertTrue(omMetadataManager.getKeyTable().isExist(ozoneKey));
     omKeyDeleteResponse.addToDBBatch(omMetadataManager, batchOperation);
@@ -148,9 +148,7 @@
   public void testAddToDBBatchWithErrorResponse() throws Exception {
     OmKeyInfo omKeyInfo = TestOMRequestUtils.createOmKeyInfo(volumeName,
         bucketName, keyName, replicationType, replicationFactor);
-    OmVolumeArgs omVolumeArgs = OmVolumeArgs.newBuilder()
-        .setOwnerName(keyName).setAdminName(keyName)
-        .setVolume(volumeName).setCreationTime(Time.now()).build();
+
     OmBucketInfo omBucketInfo = OmBucketInfo.newBuilder()
         .setVolumeName(volumeName).setBucketName(bucketName)
         .setCreationTime(Time.now()).build();
@@ -163,7 +161,7 @@
             .build();
 
     OMKeyDeleteResponse omKeyDeleteResponse = new OMKeyDeleteResponse(
-        omResponse, omKeyInfo, true, omVolumeArgs, omBucketInfo);
+        omResponse, omKeyInfo, true, omBucketInfo);
 
     String ozoneKey = omMetadataManager.getOzoneKey(volumeName, bucketName,
         keyName);
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeysDeleteResponse.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeysDeleteResponse.java
index e1f68ba..8951a05 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeysDeleteResponse.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeysDeleteResponse.java
@@ -85,8 +85,7 @@
         .setCreationTime(Time.now()).build();
 
     OMClientResponse omKeysDeleteResponse = new OMKeysDeleteResponse(
-        omResponse, omKeyInfoList, true,
-        omVolumeArgs, omBucketInfo);
+        omResponse, omKeyInfoList, true, omBucketInfo);
 
     omKeysDeleteResponse.checkAndUpdateDB(omMetadataManager, batchOperation);
 
@@ -113,16 +112,12 @@
             .setDeleteKeysResponse(DeleteKeysResponse.newBuilder()
                 .setStatus(false)).build();
 
-    OmVolumeArgs omVolumeArgs = OmVolumeArgs.newBuilder()
-        .setOwnerName(keyName).setAdminName(keyName)
-        .setVolume(volumeName).setCreationTime(Time.now()).build();
     OmBucketInfo omBucketInfo = OmBucketInfo.newBuilder()
         .setVolumeName(volumeName).setBucketName(bucketName)
         .setCreationTime(Time.now()).build();
 
     OMClientResponse omKeysDeleteResponse = new OMKeysDeleteResponse(
-        omResponse, omKeyInfoList, true,
-        omVolumeArgs, omBucketInfo);
+        omResponse, omKeyInfoList, true, omBucketInfo);
 
     omKeysDeleteResponse.checkAndUpdateDB(omMetadataManager, batchOperation);
 
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/multipart/TestS3MultipartResponse.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/multipart/TestS3MultipartResponse.java
index d185d0b..4f50d9e 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/multipart/TestS3MultipartResponse.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/multipart/TestS3MultipartResponse.java
@@ -24,7 +24,6 @@
 import java.util.UUID;
 
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Rule;
@@ -118,8 +117,7 @@
   }
 
   public S3MultipartUploadAbortResponse createS3AbortMPUResponse(
-      String multipartKey, long timeStamp,
-      OmMultipartKeyInfo omMultipartKeyInfo, OmVolumeArgs omVolumeArgs,
+      String multipartKey, OmMultipartKeyInfo omMultipartKeyInfo,
       OmBucketInfo omBucketInfo) {
     OMResponse omResponse = OMResponse.newBuilder()
         .setCmdType(OzoneManagerProtocolProtos.Type.AbortMultiPartUpload)
@@ -129,7 +127,7 @@
             MultipartUploadAbortResponse.newBuilder().build()).build();
 
     return new S3MultipartUploadAbortResponse(omResponse, multipartKey,
-        omMultipartKeyInfo, true, omVolumeArgs, omBucketInfo);
+        omMultipartKeyInfo, true, omBucketInfo);
   }
 
 
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/multipart/TestS3MultipartUploadAbortResponse.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/multipart/TestS3MultipartUploadAbortResponse.java
index da030a9..a11c4db 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/multipart/TestS3MultipartUploadAbortResponse.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/s3/multipart/TestS3MultipartUploadAbortResponse.java
@@ -21,7 +21,6 @@
 import java.util.UUID;
 
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
-import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
 import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
 import org.junit.Assert;
 import org.junit.Test;
@@ -48,9 +47,7 @@
     String multipartUploadID = UUID.randomUUID().toString();
     String multipartKey = omMetadataManager.getMultipartKey(volumeName,
         bucketName, keyName, multipartUploadID);
-    OmVolumeArgs omVolumeArgs = OmVolumeArgs.newBuilder()
-        .setOwnerName(keyName).setAdminName(keyName)
-        .setVolume(volumeName).setCreationTime(Time.now()).build();
+
     OmBucketInfo omBucketInfo = OmBucketInfo.newBuilder()
         .setVolumeName(volumeName).setBucketName(bucketName)
         .setCreationTime(Time.now()).build();
@@ -62,9 +59,9 @@
         batchOperation);
 
     S3MultipartUploadAbortResponse s3MultipartUploadAbortResponse =
-        createS3AbortMPUResponse(multipartKey, Time.now(),
+        createS3AbortMPUResponse(multipartKey,
             s3InitiateMultipartUploadResponse.getOmMultipartKeyInfo(),
-            omVolumeArgs, omBucketInfo);
+            omBucketInfo);
 
     s3MultipartUploadAbortResponse.addToDBBatch(omMetadataManager,
         batchOperation);
@@ -89,9 +86,7 @@
     String multipartUploadID = UUID.randomUUID().toString();
     String multipartKey = omMetadataManager.getMultipartKey(volumeName,
         bucketName, keyName, multipartUploadID);
-    OmVolumeArgs omVolumeArgs = OmVolumeArgs.newBuilder()
-        .setOwnerName(keyName).setAdminName(keyName)
-        .setVolume(volumeName).setCreationTime(Time.now()).build();
+
     OmBucketInfo omBucketInfo = OmBucketInfo.newBuilder()
         .setVolumeName(volumeName).setBucketName(bucketName)
         .setCreationTime(Time.now()).build();
@@ -120,11 +115,10 @@
     addPart(2, part2, omMultipartKeyInfo);
 
 
-    long timeStamp = Time.now();
     S3MultipartUploadAbortResponse s3MultipartUploadAbortResponse =
-        createS3AbortMPUResponse(multipartKey, timeStamp,
+        createS3AbortMPUResponse(multipartKey,
             s3InitiateMultipartUploadResponse.getOmMultipartKeyInfo(),
-            omVolumeArgs, omBucketInfo);
+            omBucketInfo);
 
     s3MultipartUploadAbortResponse.addToDBBatch(omMetadataManager,
         batchOperation);
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/TestOzoneBlockTokenSecretManager.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/TestOzoneBlockTokenSecretManager.java
index d38e805..1895aa7 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/TestOzoneBlockTokenSecretManager.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/TestOzoneBlockTokenSecretManager.java
@@ -37,7 +37,9 @@
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.ExpectedException;
 
 import java.io.ByteArrayInputStream;
 import java.io.DataInputStream;
@@ -63,6 +65,8 @@
       .getTempPath(TestOzoneBlockTokenSecretManager.class.getSimpleName());
   private BlockTokenVerifier tokenVerifier;
 
+  @Rule
+  public ExpectedException exception = ExpectedException.none();
 
   @Before
   public void setUp() throws Exception {
@@ -231,4 +235,38 @@
     tokenVerifier.verify(null, null,
         ContainerProtos.Type.CloseContainer, null);
   }
+
+  @Test
+  public void testBlockTokenReadAccessMode() throws Exception {
+    final String testUser1 = "testUser1";
+    final String testBlockId1 = "101";
+    Token<OzoneBlockTokenIdentifier> readToken =
+        secretManager.generateToken(testUser1, testBlockId1,
+            EnumSet.of(AccessModeProto.READ), 100);
+
+    exception.expect(BlockTokenException.class);
+    exception.expectMessage("doesn't have WRITE permission");
+    tokenVerifier.verify(testUser1, readToken.encodeToUrlString(),
+        ContainerProtos.Type.PutBlock, testBlockId1);
+
+    tokenVerifier.verify(testUser1, readToken.encodeToUrlString(),
+        ContainerProtos.Type.GetBlock, testBlockId1);
+  }
+
+  @Test
+  public void testBlockTokenWriteAccessMode() throws Exception {
+    final String testUser2 = "testUser2";
+    final String testBlockId2 = "102";
+    Token<OzoneBlockTokenIdentifier> writeToken =
+        secretManager.generateToken("testUser2", testBlockId2,
+            EnumSet.of(AccessModeProto.WRITE), 100);
+
+    tokenVerifier.verify(testUser2, writeToken.encodeToUrlString(),
+        ContainerProtos.Type.WriteChunk, testBlockId2);
+
+    exception.expect(BlockTokenException.class);
+    exception.expectMessage("doesn't have READ permission");
+    tokenVerifier.verify(testUser2, writeToken.encodeToUrlString(),
+        ContainerProtos.Type.ReadChunk, testBlockId2);
+  }
 }
\ No newline at end of file
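
One caveat about the ExpectedException pattern in the two tests above: once exception.expect(...) is armed, the test ends at the first statement that throws, so the GetBlock verification in testBlockTokenReadAccessMode never actually runs. A sketch of the READ-mode case with Assert.assertThrows instead, assuming JUnit 4.13+ is on the test classpath, which keeps both the negative and the positive check live:

    Token<OzoneBlockTokenIdentifier> readToken =
        secretManager.generateToken(testUser1, testBlockId1,
            EnumSet.of(AccessModeProto.READ), 100);

    BlockTokenException e = Assert.assertThrows(BlockTokenException.class,
        () -> tokenVerifier.verify(testUser1, readToken.encodeToUrlString(),
            ContainerProtos.Type.PutBlock, testBlockId1));
    Assert.assertTrue(e.getMessage().contains("doesn't have WRITE permission"));

    // READ access should still pass without throwing.
    tokenVerifier.verify(testUser1, readToken.encodeToUrlString(),
        ContainerProtos.Type.GetBlock, testBlockId1);
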
diff --git a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/TestOzoneDelegationTokenSecretManager.java b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/TestOzoneDelegationTokenSecretManager.java
index 03c9732..2971ca0 100644
--- a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/TestOzoneDelegationTokenSecretManager.java
+++ b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/security/TestOzoneDelegationTokenSecretManager.java
@@ -53,9 +53,9 @@
 import java.util.HashMap;
 import java.util.Map;
 
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_RATIS_ENABLE_KEY;
 import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMTokenProto.Type.S3AUTHINFO;
 
-
 /**
  * Test class for {@link OzoneDelegationTokenSecretManager}.
  */
@@ -109,6 +109,14 @@
 
   private OzoneConfiguration createNewTestPath() throws IOException {
     OzoneConfiguration config = new OzoneConfiguration();
+    // When ratis is enabled, tokens are not updated to the store directly by
+    // OzoneDelegationTokenSecretManager. Tokens are updated via Ratis
+    // through the DoubleBuffer. Hence, to test
+    // OzoneDelegationTokenSecretManager, we should disable OM Ratis.
+    // TODO: Once HA and non-HA code paths are merged in
+    //  OzoneDelegationTokenSecretManager, this test should be updated to
+    //  test both ratis enabled and disabled case.
+    config.setBoolean(OZONE_OM_RATIS_ENABLE_KEY, false);
     File newFolder = folder.newFolder();
     if (!newFolder.exists()) {
       Assert.assertTrue(newFolder.mkdirs());
diff --git a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java
index 0b50988..f9bee54 100644
--- a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java
+++ b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java
@@ -310,7 +310,7 @@
 
 
   @Override
-  public Iterator<BasicKeyInfo> listKeys(String pathKey) {
+  public Iterator<BasicKeyInfo> listKeys(String pathKey) throws IOException {
     incrementCounter(Statistic.OBJECTS_LIST, 1);
     return new IteratorAdapter(bucket.listKeys(pathKey));
   }
diff --git a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
index f6d2ef5..15adbe5 100644
--- a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
+++ b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
@@ -40,6 +40,7 @@
 import org.apache.hadoop.hdds.annotation.InterfaceStability;
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.conf.StorageUnit;
 import org.apache.hadoop.hdds.utils.LegacyHadoopConfigurationSource;
 import org.apache.hadoop.ozone.om.exceptions.OMException;
 import org.apache.hadoop.ozone.om.helpers.OzoneFSUtils;
@@ -71,6 +72,8 @@
 import static org.apache.hadoop.fs.ozone.Constants.OZONE_USER_DIR;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_FS_ITERATE_BATCH_SIZE;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_FS_ITERATE_BATCH_SIZE_DEFAULT;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE_DEFAULT;
 import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
 import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_SCHEME;
 
@@ -155,12 +158,7 @@
           .build();
       LOG.trace("Ozone URI for ozfs initialization is {}", uri);
 
-      ConfigurationSource source;
-      if (conf instanceof OzoneConfiguration) {
-        source = (ConfigurationSource) conf;
-      } else {
-        source = new LegacyHadoopConfigurationSource(conf);
-      }
+      ConfigurationSource source = getConfSource();
       this.adapter =
           createAdapter(source, bucketStr,
               volumeStr, omHost, omPort);
@@ -699,6 +697,12 @@
   }
 
   @Override
+  public long getDefaultBlockSize() {
+    return (long) getConfSource().getStorageSize(
+        OZONE_SCM_BLOCK_SIZE, OZONE_SCM_BLOCK_SIZE_DEFAULT, StorageUnit.BYTES);
+  }
+
+  @Override
   public FileStatus getFileStatus(Path f) throws IOException {
     incrementCounter(Statistic.INVOCATION_GET_FILE_STATUS, 1);
     statistics.incrementReadOps(1);
@@ -835,6 +839,17 @@
     }
   }
 
+  public ConfigurationSource getConfSource() {
+    Configuration conf = super.getConf();
+    ConfigurationSource source;
+    if (conf instanceof OzoneConfiguration) {
+      source = (ConfigurationSource) conf;
+    } else {
+      source = new LegacyHadoopConfigurationSource(conf);
+    }
+    return source;
+  }
+
   @Override
   public String toString() {
     return "OzoneFileSystem{URI=" + uri + ", "
diff --git a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
index 848119d..2bf6ee3 100644
--- a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
+++ b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
@@ -47,6 +47,7 @@
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.security.x509.SecurityConfig;
 import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OFSPath;
 import org.apache.hadoop.ozone.OmUtils;
 import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.apache.hadoop.ozone.client.ObjectStore;
@@ -615,7 +616,7 @@
   }
 
   @Override
-  public Iterator<BasicKeyInfo> listKeys(String pathStr) {
+  public Iterator<BasicKeyInfo> listKeys(String pathStr) throws IOException {
     incrementCounter(Statistic.OBJECTS_LIST, 1);
     OFSPath ofsPath = new OFSPath(pathStr);
     String key = ofsPath.getKeyName();
diff --git a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java
index c035abb..6fb1284 100644
--- a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java
+++ b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java
@@ -38,7 +38,9 @@
 import org.apache.hadoop.hdds.annotation.InterfaceStability;
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.conf.StorageUnit;
 import org.apache.hadoop.hdds.utils.LegacyHadoopConfigurationSource;
+import org.apache.hadoop.ozone.OFSPath;
 import org.apache.hadoop.ozone.client.OzoneBucket;
 import org.apache.hadoop.ozone.client.OzoneVolume;
 import org.apache.hadoop.ozone.om.exceptions.OMException;
@@ -68,6 +70,8 @@
 import static org.apache.hadoop.fs.ozone.Constants.OZONE_USER_DIR;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_FS_ITERATE_BATCH_SIZE;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_FS_ITERATE_BATCH_SIZE_DEFAULT;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE_DEFAULT;
 import static org.apache.hadoop.ozone.OzoneConsts.OZONE_URI_DELIMITER;
 import static org.apache.hadoop.ozone.OzoneConsts.OZONE_OFS_URI_SCHEME;
 import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.BUCKET_NOT_EMPTY;
@@ -140,12 +144,7 @@
           .build();
       LOG.trace("Ozone URI for OFS initialization is " + uri);
 
-      ConfigurationSource source;
-      if (conf instanceof OzoneConfiguration) {
-        source = (ConfigurationSource) conf;
-      } else {
-        source = new LegacyHadoopConfigurationSource(conf);
-      }
+      ConfigurationSource source = getConfSource();
       this.adapter = createAdapter(source, omHostOrServiceId, omPort);
       this.adapterImpl = (BasicRootedOzoneClientAdapterImpl) this.adapter;
 
@@ -724,6 +723,12 @@
   }
 
   @Override
+  public long getDefaultBlockSize() {
+    return (long) getConfSource().getStorageSize(
+        OZONE_SCM_BLOCK_SIZE, OZONE_SCM_BLOCK_SIZE_DEFAULT, StorageUnit.BYTES);
+  }
+
+  @Override
   public FileStatus getFileStatus(Path f) throws IOException {
     incrementCounter(Statistic.INVOCATION_GET_FILE_STATUS, 1);
     statistics.incrementReadOps(1);
@@ -875,6 +880,17 @@
         + "}";
   }
 
+  public ConfigurationSource getConfSource() {
+    Configuration conf = super.getConf();
+    ConfigurationSource source;
+    if (conf instanceof OzoneConfiguration) {
+      source = (ConfigurationSource) conf;
+    } else {
+      source = new LegacyHadoopConfigurationSource(conf);
+    }
+    return source;
+  }
+
   /**
    * This class provides an interface to iterate through all the keys in the
    * bucket prefixed with the input path key and process them.
diff --git a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/CapableOzoneFSInputStream.java b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/CapableOzoneFSInputStream.java
index cef6a58..2e8a469 100644
--- a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/CapableOzoneFSInputStream.java
+++ b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/CapableOzoneFSInputStream.java
@@ -1,4 +1,4 @@
-/**
+/*
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -34,6 +34,7 @@
   public boolean hasCapability(String capability) {
     switch (StringUtils.toLowerCase(capability)) {
     case OzoneStreamCapabilities.READBYTEBUFFER:
+    case OzoneStreamCapabilities.UNBUFFER:
       return true;
     default:
       return false;
diff --git a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapter.java b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapter.java
index 2b76c22..b9e2881 100644
--- a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapter.java
+++ b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapter.java
@@ -55,7 +55,7 @@
 
   boolean deleteObjects(List<String> keyName);
 
-  Iterator<BasicKeyInfo> listKeys(String pathKey);
+  Iterator<BasicKeyInfo> listKeys(String pathKey) throws IOException;
 
   List<FileStatusAdapter> listStatus(String keyName, boolean recursive,
       String startKey, long numEntries, URI uri,
diff --git a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OzoneFSInputStream.java b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OzoneFSInputStream.java
index 313ae6f..35bd0d5 100644
--- a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OzoneFSInputStream.java
+++ b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OzoneFSInputStream.java
@@ -23,6 +23,7 @@
 import java.nio.ByteBuffer;
 import java.nio.ReadOnlyBufferException;
 
+import org.apache.hadoop.fs.CanUnbuffer;
 import org.apache.hadoop.hdds.annotation.InterfaceAudience;
 import org.apache.hadoop.hdds.annotation.InterfaceStability;
 import org.apache.hadoop.fs.ByteBufferReadable;
@@ -39,7 +40,7 @@
 @InterfaceAudience.Private
 @InterfaceStability.Evolving
 public class OzoneFSInputStream extends FSInputStream
-    implements ByteBufferReadable {
+    implements ByteBufferReadable, CanUnbuffer {
 
   private final InputStream inputStream;
   private final Statistics statistics;
@@ -123,4 +124,11 @@
 
     return bytesRead;
   }
+
+  @Override
+  public void unbuffer() {
+    if (inputStream instanceof CanUnbuffer) {
+      ((CanUnbuffer) inputStream).unbuffer();
+    }
+  }
 }
diff --git a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OzoneStreamCapabilities.java b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OzoneStreamCapabilities.java
index db90cd9..5dd69a4 100644
--- a/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OzoneStreamCapabilities.java
+++ b/hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/OzoneStreamCapabilities.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.fs.ozone;
 
+import org.apache.hadoop.fs.CanUnbuffer;
+
 import java.nio.ByteBuffer;
 
 /**
@@ -35,4 +37,11 @@
    * TODO: If Hadoop dependency is upgraded, this string can be removed.
    */
   static final String READBYTEBUFFER = "in:readbytebuffer";
+
+  /**
+   * Stream unbuffer capability implemented by {@link CanUnbuffer#unbuffer()}.
+   *
+   * TODO: If Hadoop dependency is upgraded, this string can be removed.
+   */
+  static final String UNBUFFER = "in:unbuffer";
 }
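
With CanUnbuffer wired through OzoneFSInputStream and advertised via the new UNBUFFER capability string, long-lived readers can release buffers between bursts of reads without closing the stream. A usage sketch; the path, configuration and class name are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class UnbufferExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        try (FSDataInputStream in = fs.open(new Path("/volume/bucket/key"))) {
          byte[] buf = new byte[4096];
          in.read(buf);
          if (in.hasCapability("in:unbuffer")) {
            in.unbuffer();  // drop buffers/connections; the stream stays usable
          }
          in.read(buf);     // subsequent reads re-acquire resources as needed
        }
      }
    }
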
diff --git a/hadoop-ozone/ozonefs-common/src/test/java/org/apache/hadoop/fs/ozone/TestBasicOzoneFileSystems.java b/hadoop-ozone/ozonefs-common/src/test/java/org/apache/hadoop/fs/ozone/TestBasicOzoneFileSystems.java
new file mode 100644
index 0000000..1db1ee5
--- /dev/null
+++ b/hadoop-ozone/ozonefs-common/src/test/java/org/apache/hadoop/fs/ozone/TestBasicOzoneFileSystems.java
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.conf.StorageSize;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import java.util.Arrays;
+import java.util.Collection;
+
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SCM_BLOCK_SIZE_DEFAULT;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Unit test for Basic*OzoneFileSystem.
+ */
+@RunWith(Parameterized.class)
+public class TestBasicOzoneFileSystems {
+
+  private final FileSystem subject;
+
+  @Parameterized.Parameters
+  public static Collection<Object[]> data() {
+    return Arrays.asList(
+        new Object[]{new BasicOzoneFileSystem()},
+        new Object[]{new BasicRootedOzoneFileSystem()}
+    );
+  }
+
+  public TestBasicOzoneFileSystems(FileSystem subject) {
+    this.subject = subject;
+  }
+
+  @Test
+  public void defaultBlockSize() {
+    Configuration conf = new OzoneConfiguration();
+    subject.setConf(conf);
+
+    long expected = toBytes(OZONE_SCM_BLOCK_SIZE_DEFAULT);
+    assertDefaultBlockSize(expected);
+  }
+
+  @Test
+  public void defaultBlockSizeCustomized() {
+    String customValue = "128MB";
+    Configuration conf = new OzoneConfiguration();
+    conf.set(OZONE_SCM_BLOCK_SIZE, customValue);
+    subject.setConf(conf);
+
+    assertDefaultBlockSize(toBytes(customValue));
+  }
+
+  private void assertDefaultBlockSize(long expected) {
+    assertEquals(expected, subject.getDefaultBlockSize());
+
+    Path anyPath = new Path("/");
+    assertEquals(expected, subject.getDefaultBlockSize(anyPath));
+
+    Path nonExistentFile = new Path("/no/such/file");
+    assertEquals(expected, subject.getDefaultBlockSize(nonExistentFile));
+  }
+
+  private static long toBytes(String value) {
+    StorageSize blockSize = StorageSize.parse(value);
+    return (long) blockSize.getUnit().toBytes(blockSize.getValue());
+  }
+}
diff --git a/hadoop-ozone/ozonefs-common/src/test/java/org/apache/hadoop/fs/ozone/TestOFSPath.java b/hadoop-ozone/ozonefs-common/src/test/java/org/apache/hadoop/fs/ozone/TestOFSPath.java
index afdeb51..e941fdb 100644
--- a/hadoop-ozone/ozonefs-common/src/test/java/org/apache/hadoop/fs/ozone/TestOFSPath.java
+++ b/hadoop-ozone/ozonefs-common/src/test/java/org/apache/hadoop/fs/ozone/TestOFSPath.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.fs.ozone;
 
+import org.apache.hadoop.ozone.OFSPath;
 import org.junit.Assert;
 import org.junit.Test;
 
diff --git a/hadoop-ozone/pom.xml b/hadoop-ozone/pom.xml
index b4b91e1..d2132cd 100644
--- a/hadoop-ozone/pom.xml
+++ b/hadoop-ozone/pom.xml
@@ -26,7 +26,7 @@
 
   <properties>
     <docker.image>apache/ozone:${project.version}</docker.image>
-    <spring.version>5.2.5.RELEASE</spring.version>
+    <spring.version>5.2.11.RELEASE</spring.version>
     <jooq.version>3.11.10</jooq.version>
   </properties>
   <modules>
diff --git a/hadoop-ozone/recon-codegen/src/main/java/org/hadoop/ozone/recon/schema/ContainerSchemaDefinition.java b/hadoop-ozone/recon-codegen/src/main/java/org/hadoop/ozone/recon/schema/ContainerSchemaDefinition.java
index 1be715d..c2fade35 100644
--- a/hadoop-ozone/recon-codegen/src/main/java/org/hadoop/ozone/recon/schema/ContainerSchemaDefinition.java
+++ b/hadoop-ozone/recon-codegen/src/main/java/org/hadoop/ozone/recon/schema/ContainerSchemaDefinition.java
@@ -38,8 +38,6 @@
 @Singleton
 public class ContainerSchemaDefinition implements ReconSchemaDefinition {
 
-  public static final String CONTAINER_HISTORY_TABLE_NAME =
-      "CONTAINER_HISTORY";
   public static final String UNHEALTHY_CONTAINERS_TABLE_NAME =
       "UNHEALTHY_CONTAINERS";
 
@@ -68,29 +66,12 @@
   public void initializeSchema() throws SQLException {
     Connection conn = dataSource.getConnection();
     dslContext = DSL.using(conn);
-    if (!TABLE_EXISTS_CHECK.test(conn, CONTAINER_HISTORY_TABLE_NAME)) {
-      createContainerHistoryTable();
-    }
     if (!TABLE_EXISTS_CHECK.test(conn, UNHEALTHY_CONTAINERS_TABLE_NAME)) {
       createUnhealthyContainersTable();
     }
   }
 
   /**
-   * Create the Container History table.
-   */
-  private void createContainerHistoryTable() {
-    dslContext.createTableIfNotExists(CONTAINER_HISTORY_TABLE_NAME)
-        .column(CONTAINER_ID, SQLDataType.BIGINT.nullable(false))
-        .column("datanode_host", SQLDataType.VARCHAR(766).nullable(false))
-        .column("first_report_timestamp", SQLDataType.BIGINT)
-        .column("last_report_timestamp", SQLDataType.BIGINT)
-        .constraint(DSL.constraint("pk_container_id_datanode_host")
-            .primaryKey(CONTAINER_ID, "datanode_host"))
-        .execute();
-  }
-
-  /**
    * Create the Missing Containers table.
    */
   private void createUnhealthyContainersTable() {
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/ReconControllerModule.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/ReconControllerModule.java
index cb667f4..9805343 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/ReconControllerModule.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/ReconControllerModule.java
@@ -30,7 +30,7 @@
 import org.apache.hadoop.ozone.om.protocolPB.OmTransport;
 import org.apache.hadoop.ozone.om.protocolPB.OmTransportFactory;
 import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB;
-import org.apache.hadoop.ozone.recon.persistence.ContainerSchemaManager;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHealthSchemaManager;
 import org.apache.hadoop.ozone.recon.persistence.DataSourceConfiguration;
 import org.apache.hadoop.ozone.recon.persistence.JooqPersistenceModule;
 import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
@@ -62,7 +62,6 @@
 import org.apache.ratis.protocol.ClientId;
 import org.hadoop.ozone.recon.codegen.ReconSqlDbConfig;
 import org.hadoop.ozone.recon.schema.tables.daos.ClusterGrowthDailyDao;
-import org.hadoop.ozone.recon.schema.tables.daos.ContainerHistoryDao;
 import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
 import org.hadoop.ozone.recon.schema.tables.daos.GlobalStatsDao;
 import org.hadoop.ozone.recon.schema.tables.daos.ReconTaskStatusDao;
@@ -89,7 +88,7 @@
         .to(ReconOmMetadataManagerImpl.class);
     bind(OMMetadataManager.class).to(ReconOmMetadataManagerImpl.class);
 
-    bind(ContainerSchemaManager.class).in(Singleton.class);
+    bind(ContainerHealthSchemaManager.class).in(Singleton.class);
     bind(ContainerDBServiceProvider.class)
         .to(ContainerDBServiceProviderImpl.class).in(Singleton.class);
     bind(OzoneManagerServiceProvider.class)
@@ -132,8 +131,7 @@
             ReconTaskStatusDao.class,
             UnhealthyContainersDao.class,
             GlobalStatsDao.class,
-            ClusterGrowthDailyDao.class,
-            ContainerHistoryDao.class);
+            ClusterGrowthDailyDao.class);
 
     @Override
     protected void configure() {
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/ClusterStateEndpoint.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/ClusterStateEndpoint.java
index ec931f4..aa90fbf 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/ClusterStateEndpoint.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/ClusterStateEndpoint.java
@@ -19,8 +19,8 @@
 package org.apache.hadoop.ozone.recon.api;
 
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
+import org.apache.hadoop.hdds.scm.node.NodeStatus;
 import org.apache.hadoop.hdds.scm.server.OzoneStorageContainerManager;
 import org.apache.hadoop.ozone.recon.api.types.ClusterStateResponse;
 import org.apache.hadoop.ozone.recon.api.types.DatanodeStorageReport;
@@ -81,8 +81,8 @@
     int containers = this.containerManager.getContainerIDs().size();
     int pipelines = this.pipelineManager.getPipelines().size();
     int healthyDatanodes =
-        nodeManager.getNodeCount(NodeState.HEALTHY) +
-            nodeManager.getNodeCount(NodeState.HEALTHY_READONLY);
+        nodeManager.getNodeCount(NodeStatus.inServiceHealthy()) +
+            nodeManager.getNodeCount(NodeStatus.inServiceHealthyReadOnly());
     SCMNodeStat stats = nodeManager.getStats();
     DatanodeStorageReport storageReport =
         new DatanodeStorageReport(stats.getCapacity().get(),
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerEndpoint.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerEndpoint.java
index 1778b84..5cd6ec8 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerEndpoint.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerEndpoint.java
@@ -55,12 +55,12 @@
 import org.apache.hadoop.ozone.recon.api.types.UnhealthyContainerMetadata;
 import org.apache.hadoop.ozone.recon.api.types.UnhealthyContainersResponse;
 import org.apache.hadoop.ozone.recon.api.types.UnhealthyContainersSummary;
-import org.apache.hadoop.ozone.recon.persistence.ContainerSchemaManager;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHistory;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHealthSchemaManager;
 import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
 import org.apache.hadoop.ozone.recon.scm.ReconContainerManager;
 import org.apache.hadoop.ozone.recon.spi.ContainerDBServiceProvider;
 import org.hadoop.ozone.recon.schema.ContainerSchemaDefinition.UnHealthyContainerStates;
-import org.hadoop.ozone.recon.schema.tables.pojos.ContainerHistory;
 import org.hadoop.ozone.recon.schema.tables.pojos.UnhealthyContainers;
 
 import static org.apache.hadoop.ozone.recon.ReconConstants.DEFAULT_BATCH_NUMBER;
@@ -84,15 +84,15 @@
   @Inject
   private ReconOMMetadataManager omMetadataManager;
 
-  private ReconContainerManager containerManager;
-  private ContainerSchemaManager containerSchemaManager;
+  private final ReconContainerManager containerManager;
+  private final ContainerHealthSchemaManager containerHealthSchemaManager;
 
   @Inject
   public ContainerEndpoint(OzoneStorageContainerManager reconSCM,
-                           ContainerSchemaManager containerSchemaManager) {
+      ContainerHealthSchemaManager containerHealthSchemaManager) {
     this.containerManager =
         (ReconContainerManager) reconSCM.getContainerManager();
-    this.containerSchemaManager = containerSchemaManager;
+    this.containerHealthSchemaManager = containerHealthSchemaManager;
   }
 
   /**
@@ -226,7 +226,7 @@
   public Response getReplicaHistoryForContainer(
       @PathParam("id") Long containerID) {
     return Response.ok(
-        containerSchemaManager.getAllContainerHistory(containerID)).build();
+        containerManager.getAllContainerHistory(containerID)).build();
   }
 
   /**
@@ -240,7 +240,7 @@
   @Path("/missing")
   public Response getMissingContainers() {
     List<MissingContainerMetadata> missingContainers = new ArrayList<>();
-    containerSchemaManager.getUnhealthyContainers(
+    containerHealthSchemaManager.getUnhealthyContainers(
         UnHealthyContainerStates.MISSING, 0, Integer.MAX_VALUE)
         .forEach(container -> {
           long containerID = container.getContainerId();
@@ -251,7 +251,7 @@
             UUID pipelineID = containerInfo.getPipelineID().getId();
 
             List<ContainerHistory> datanodes =
-                containerSchemaManager.getLatestContainerHistory(containerID,
+                containerManager.getLatestContainerHistory(containerID,
                     containerInfo.getReplicationFactor().getNumber());
             missingContainers.add(new MissingContainerMetadata(containerID,
                 container.getInStateSince(), keyCount, pipelineID, datanodes));
@@ -301,8 +301,8 @@
         internalState = UnHealthyContainerStates.valueOf(state);
       }
 
-      summary = containerSchemaManager.getUnhealthyContainersSummary();
-      List<UnhealthyContainers> containers = containerSchemaManager
+      summary = containerHealthSchemaManager.getUnhealthyContainersSummary();
+      List<UnhealthyContainers> containers = containerHealthSchemaManager
           .getUnhealthyContainers(internalState, offset, limit);
       for (UnhealthyContainers c : containers) {
         long containerID = c.getContainerId();
@@ -311,8 +311,7 @@
         long keyCount = containerInfo.getNumberOfKeys();
         UUID pipelineID = containerInfo.getPipelineID().getId();
         List<ContainerHistory> datanodes =
-            containerSchemaManager.getLatestContainerHistory(
-                containerID,
+            containerManager.getLatestContainerHistory(containerID,
                 containerInfo.getReplicationFactor().getNumber());
         unhealthyMeta.add(new UnhealthyContainerMetadata(
             c, datanodes, pipelineID, keyCount));
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/NodeEndpoint.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/NodeEndpoint.java
index bd022c4..2f88497 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/NodeEndpoint.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/NodeEndpoint.java
@@ -79,7 +79,12 @@
 
     datanodeDetails.forEach(datanode -> {
       DatanodeStorageReport storageReport = getStorageReport(datanode);
-      NodeState nodeState = nodeManager.getNodeState(datanode);
+      NodeState nodeState = null;
+      try {
+        nodeState = nodeManager.getNodeStatus(datanode).getHealth();
+      } catch (NodeNotFoundException e) {
+        LOG.warn("Cannot get nodeState for datanode {}", datanode, e);
+      }
       String hostname = datanode.getHostName();
       Set<PipelineID> pipelineIDs = nodeManager.getPipelines(datanode);
       List<DatanodePipeline> pipelines = new ArrayList<>();
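
The endpoint now goes through NodeStatus instead of the removed getNodeState call. A standalone sketch of the same lookup, with import paths assumed from the HDDS node APIs used above and a made-up helper name:

import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;

final class NodeHealthSketch {
  private NodeHealthSketch() { }

  // Returns null when the datanode is not (or no longer) known to the manager.
  static NodeState healthOrNull(NodeManager nodeManager, DatanodeDetails dn) {
    try {
      return nodeManager.getNodeStatus(dn).getHealth();
    } catch (NodeNotFoundException e) {
      return null;
    }
  }
}
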
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/types/MissingContainerMetadata.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/types/MissingContainerMetadata.java
index 3eff647..a23fdae 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/types/MissingContainerMetadata.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/types/MissingContainerMetadata.java
@@ -17,7 +17,7 @@
  */
 package org.apache.hadoop.ozone.recon.api.types;
 
-import org.hadoop.ozone.recon.schema.tables.pojos.ContainerHistory;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHistory;
 
 import javax.xml.bind.annotation.XmlAccessType;
 import javax.xml.bind.annotation.XmlAccessorType;
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/types/UnhealthyContainerMetadata.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/types/UnhealthyContainerMetadata.java
index 370c2a6..808e85d 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/types/UnhealthyContainerMetadata.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/api/types/UnhealthyContainerMetadata.java
@@ -17,7 +17,7 @@
  */
 package org.apache.hadoop.ozone.recon.api.types;
 
-import org.hadoop.ozone.recon.schema.tables.pojos.ContainerHistory;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHistory;
 import org.hadoop.ozone.recon.schema.tables.pojos.UnhealthyContainers;
 
 import javax.xml.bind.annotation.XmlAccessType;
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/codec/ContainerReplicaHistoryListCodec.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/codec/ContainerReplicaHistoryListCodec.java
new file mode 100644
index 0000000..cef9ff7
--- /dev/null
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/codec/ContainerReplicaHistoryListCodec.java
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.ozone.recon.codec;
+
+import org.apache.hadoop.hdds.utils.db.Codec;
+import org.apache.hadoop.hdds.utils.db.LongCodec;
+import org.apache.hadoop.ozone.recon.scm.ContainerReplicaHistory;
+import org.apache.hadoop.ozone.recon.scm.ContainerReplicaHistoryList;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+/**
+ * Codec for ContainerReplicaHistoryList.
+ */
+public class ContainerReplicaHistoryListCodec
+    implements Codec<ContainerReplicaHistoryList> {
+
+  // A UUID takes 2 longs to store; each timestamp takes 1 long to store.
+  static final int SIZE_PER_ENTRY = 4 * Long.BYTES;
+  private final Codec<Long> lc = new LongCodec();
+
+  @Override
+  public byte[] toPersistedFormat(ContainerReplicaHistoryList obj)
+      throws IOException {
+
+    List<ContainerReplicaHistory> lst = obj.getList();
+    final int sizeOfRes = SIZE_PER_ENTRY * lst.size();
+    // ByteArrayOutputStream constructor has a sanity check on size.
+    ByteArrayOutputStream out = new ByteArrayOutputStream(sizeOfRes);
+    for (ContainerReplicaHistory ts : lst) {
+      out.write(lc.toPersistedFormat(ts.getUuid().getMostSignificantBits()));
+      out.write(lc.toPersistedFormat(ts.getUuid().getLeastSignificantBits()));
+      out.write(lc.toPersistedFormat(ts.getFirstSeenTime()));
+      out.write(lc.toPersistedFormat(ts.getLastSeenTime()));
+    }
+    return out.toByteArray();
+  }
+
+  @Override
+  public ContainerReplicaHistoryList fromPersistedFormat(byte[] rawData)
+      throws IOException {
+
+    assert(rawData.length % SIZE_PER_ENTRY == 0);
+    DataInputStream in = new DataInputStream(new ByteArrayInputStream(rawData));
+    List<ContainerReplicaHistory> lst = new ArrayList<>();
+    while (in.available() > 0) {
+      final long uuidMsb = in.readLong();
+      final long uuidLsb = in.readLong();
+      final long firstSeenTime = in.readLong();
+      final long lastSeenTime = in.readLong();
+      final UUID id = new UUID(uuidMsb, uuidLsb);
+      lst.add(new ContainerReplicaHistory(id, firstSeenTime, lastSeenTime));
+    }
+    in.close();
+    return new ContainerReplicaHistoryList(lst);
+  }
+
+  @Override
+  public ContainerReplicaHistoryList copyObject(
+      ContainerReplicaHistoryList obj) {
+    return obj;
+  }
+}
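
A minimal round-trip sketch for the codec, using made-up UUID and timestamp values, to illustrate the fixed 32-byte layout per entry (two longs for the UUID, two for the timestamps):

import java.util.Collections;
import java.util.UUID;
import org.apache.hadoop.ozone.recon.codec.ContainerReplicaHistoryListCodec;
import org.apache.hadoop.ozone.recon.scm.ContainerReplicaHistory;
import org.apache.hadoop.ozone.recon.scm.ContainerReplicaHistoryList;

public class ReplicaHistoryCodecExample {
  public static void main(String[] args) throws Exception {
    ContainerReplicaHistory entry =
        new ContainerReplicaHistory(UUID.randomUUID(), 1000L, 2000L);
    ContainerReplicaHistoryListCodec codec =
        new ContainerReplicaHistoryListCodec();

    // Serialize: each entry becomes exactly SIZE_PER_ENTRY (32) bytes.
    byte[] raw = codec.toPersistedFormat(
        new ContainerReplicaHistoryList(Collections.singletonList(entry)));

    // Deserialize and read the last seen time back.
    ContainerReplicaHistoryList decoded = codec.fromPersistedFormat(raw);
    System.out.println(decoded.getList().get(0).getLastSeenTime()); // 2000
  }
}
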
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/fsck/ContainerHealthTask.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/fsck/ContainerHealthTask.java
index 315dd5c..9ceb5dd 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/fsck/ContainerHealthTask.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/fsck/ContainerHealthTask.java
@@ -28,7 +28,7 @@
 import org.apache.hadoop.hdds.scm.container.ContainerManager;
 import org.apache.hadoop.hdds.scm.container.ContainerNotFoundException;
 import org.apache.hadoop.hdds.scm.container.ContainerReplica;
-import org.apache.hadoop.ozone.recon.persistence.ContainerSchemaManager;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHealthSchemaManager;
 import org.apache.hadoop.ozone.recon.scm.ReconScmTask;
 import org.apache.hadoop.ozone.recon.tasks.ReconTaskConfig;
 import org.apache.hadoop.util.Time;
@@ -50,7 +50,7 @@
       LoggerFactory.getLogger(ContainerHealthTask.class);
 
   private ContainerManager containerManager;
-  private ContainerSchemaManager containerSchemaManager;
+  private ContainerHealthSchemaManager containerHealthSchemaManager;
   private PlacementPolicy placementPolicy;
   private final long interval;
   private Set<ContainerInfo> processedContainers = new HashSet<>();
@@ -58,11 +58,11 @@
   public ContainerHealthTask(
       ContainerManager containerManager,
       ReconTaskStatusDao reconTaskStatusDao,
-      ContainerSchemaManager containerSchemaManager,
+      ContainerHealthSchemaManager containerHealthSchemaManager,
       PlacementPolicy placementPolicy,
       ReconTaskConfig reconTaskConfig) {
     super(reconTaskStatusDao);
-    this.containerSchemaManager = containerSchemaManager;
+    this.containerHealthSchemaManager = containerHealthSchemaManager;
     this.placementPolicy = placementPolicy;
     this.containerManager = containerManager;
     interval = reconTaskConfig.getMissingContainerTaskInterval().toMillis();
@@ -105,7 +105,7 @@
 
   private void completeProcessingContainer(ContainerHealthStatus container,
       Set<String> existingRecords, long currentTime) {
-    containerSchemaManager.insertUnhealthyContainerRecords(
+    containerHealthSchemaManager.insertUnhealthyContainerRecords(
         ContainerHealthRecords.generateUnhealthyRecords(
             container, existingRecords, currentTime));
     processedContainers.add(container.getContainer());
@@ -128,7 +128,7 @@
   private long processExistingDBRecords(long currentTime) {
     long recordCount = 0;
     try (Cursor<UnhealthyContainersRecord> cursor =
-             containerSchemaManager.getAllUnhealthyRecordsCursor()) {
+             containerHealthSchemaManager.getAllUnhealthyRecordsCursor()) {
       ContainerHealthStatus currentContainer = null;
       Set<String> existingRecords = new HashSet<>();
       while(cursor.hasNext()) {
@@ -176,7 +176,7 @@
       if (h.isHealthy()) {
         return;
       }
-      containerSchemaManager.insertUnhealthyContainerRecords(
+      containerHealthSchemaManager.insertUnhealthyContainerRecords(
           ContainerHealthRecords.generateUnhealthyRecords(h, currentTime));
     } catch (ContainerNotFoundException e) {
       LOG.error("Container not found while processing container in Container " +
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/persistence/ContainerSchemaManager.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/persistence/ContainerHealthSchemaManager.java
similarity index 66%
rename from hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/persistence/ContainerSchemaManager.java
rename to hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/persistence/ContainerHealthSchemaManager.java
index bf37c34..817f5f1 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/persistence/ContainerSchemaManager.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/persistence/ContainerHealthSchemaManager.java
@@ -17,7 +17,6 @@
  */
 package org.apache.hadoop.ozone.recon.persistence;
 
-import static org.hadoop.ozone.recon.schema.tables.ContainerHistoryTable.CONTAINER_HISTORY;
 import static org.hadoop.ozone.recon.schema.tables.UnhealthyContainersTable.UNHEALTHY_CONTAINERS;
 import static org.jooq.impl.DSL.count;
 
@@ -26,15 +25,12 @@
 import org.apache.hadoop.ozone.recon.api.types.UnhealthyContainersSummary;
 import org.hadoop.ozone.recon.schema.ContainerSchemaDefinition;
 import org.hadoop.ozone.recon.schema.ContainerSchemaDefinition.UnHealthyContainerStates;
-import org.hadoop.ozone.recon.schema.tables.daos.ContainerHistoryDao;
 import org.hadoop.ozone.recon.schema.tables.daos.UnhealthyContainersDao;
-import org.hadoop.ozone.recon.schema.tables.pojos.ContainerHistory;
 import org.hadoop.ozone.recon.schema.tables.pojos.UnhealthyContainers;
 import org.hadoop.ozone.recon.schema.tables.records.UnhealthyContainersRecord;
 import org.jooq.Cursor;
 import org.jooq.DSLContext;
 import org.jooq.Record;
-import org.jooq.Record2;
 import org.jooq.SelectQuery;
 import java.util.List;
 
@@ -42,16 +38,15 @@
  * Provide a high level API to access the Container Schema.
  */
 @Singleton
-public class ContainerSchemaManager {
-  private ContainerHistoryDao containerHistoryDao;
-  private UnhealthyContainersDao unhealthyContainersDao;
-  private ContainerSchemaDefinition containerSchemaDefinition;
+public class ContainerHealthSchemaManager {
+
+  private final UnhealthyContainersDao unhealthyContainersDao;
+  private final ContainerSchemaDefinition containerSchemaDefinition;
 
   @Inject
-  public ContainerSchemaManager(ContainerHistoryDao containerHistoryDao,
+  public ContainerHealthSchemaManager(
       ContainerSchemaDefinition containerSchemaDefinition,
       UnhealthyContainersDao unhealthyContainersDao) {
-    this.containerHistoryDao = containerHistoryDao;
     this.unhealthyContainersDao = unhealthyContainersDao;
     this.containerSchemaDefinition = containerSchemaDefinition;
   }
@@ -113,40 +108,4 @@
     unhealthyContainersDao.insert(recs);
   }
 
-  public void upsertContainerHistory(long containerID, String datanode,
-                                     long time) {
-    DSLContext dslContext = containerSchemaDefinition.getDSLContext();
-    Record2<Long, String> recordToFind =
-        dslContext.newRecord(
-        CONTAINER_HISTORY.CONTAINER_ID,
-        CONTAINER_HISTORY.DATANODE_HOST).value1(containerID).value2(datanode);
-    ContainerHistory newRecord = new ContainerHistory();
-    newRecord.setContainerId(containerID);
-    newRecord.setDatanodeHost(datanode);
-    newRecord.setLastReportTimestamp(time);
-    ContainerHistory record = containerHistoryDao.findById(recordToFind);
-    if (record != null) {
-      newRecord.setFirstReportTimestamp(record.getFirstReportTimestamp());
-      containerHistoryDao.update(newRecord);
-    } else {
-      newRecord.setFirstReportTimestamp(time);
-      containerHistoryDao.insert(newRecord);
-    }
-  }
-
-  public List<ContainerHistory> getAllContainerHistory(long containerID) {
-    return containerHistoryDao.fetchByContainerId(containerID);
-  }
-
-  public List<ContainerHistory> getLatestContainerHistory(long containerID,
-                                                          int limit) {
-    DSLContext dslContext = containerSchemaDefinition.getDSLContext();
-    // Get container history sorted in descending order of last report timestamp
-    return dslContext.select()
-        .from(CONTAINER_HISTORY)
-        .where(CONTAINER_HISTORY.CONTAINER_ID.eq(containerID))
-        .orderBy(CONTAINER_HISTORY.LAST_REPORT_TIMESTAMP.desc())
-        .limit(limit)
-        .fetchInto(ContainerHistory.class);
-  }
 }
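
With the container history methods moved to ReconContainerManager, the renamed class now deals only with container health records. A small usage sketch; in Recon the manager instance is injected by Guice, so the static helper here is illustrative only:

import java.util.List;
import org.apache.hadoop.ozone.recon.persistence.ContainerHealthSchemaManager;
import org.hadoop.ozone.recon.schema.ContainerSchemaDefinition.UnHealthyContainerStates;
import org.hadoop.ozone.recon.schema.tables.pojos.UnhealthyContainers;

public final class UnhealthyContainerReport {
  private UnhealthyContainerReport() { }

  /** Print the first 100 containers Recon currently considers MISSING. */
  public static void printMissing(ContainerHealthSchemaManager schemaManager) {
    List<UnhealthyContainers> missing = schemaManager.getUnhealthyContainers(
        UnHealthyContainerStates.MISSING, 0, 100);
    for (UnhealthyContainers c : missing) {
      System.out.println(c.getContainerId() + " missing since "
          + c.getInStateSince());
    }
  }
}
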
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/persistence/ContainerHistory.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/persistence/ContainerHistory.java
new file mode 100644
index 0000000..805f5ae
--- /dev/null
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/persistence/ContainerHistory.java
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.hadoop.ozone.recon.persistence;
+
+import java.io.Serializable;
+
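+/**
+ * POJO holding a container replica's datanode and its first/last seen times,
+ * returned by Recon's container history endpoints.
+ */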
+public class ContainerHistory implements Serializable {
+
+  private long containerId;
+  private String datanodeUuid;
+  private String datanodeHost;
+  private long firstSeenTime;
+  private long lastSeenTime;
+
+  public ContainerHistory(long containerId, String datanodeUuid,
+      String datanodeHost, long firstSeenTime, long lastSeenTime) {
+    this.containerId = containerId;
+    this.datanodeUuid = datanodeUuid;
+    this.datanodeHost = datanodeHost;
+    this.firstSeenTime = firstSeenTime;
+    this.lastSeenTime = lastSeenTime;
+  }
+
+  public long getContainerId() {
+    return containerId;
+  }
+
+  public void setContainerId(long containerId) {
+    this.containerId = containerId;
+  }
+
+  public String getDatanodeUuid() {
+    return datanodeUuid;
+  }
+
+  public void setDatanodeUuid(String datanodeUuid) {
+    this.datanodeUuid = datanodeUuid;
+  }
+
+  public String getDatanodeHost() {
+    return datanodeHost;
+  }
+
+  public void setDatanodeHost(String datanodeHost) {
+    this.datanodeHost = datanodeHost;
+  }
+
+  public long getFirstSeenTime() {
+    return firstSeenTime;
+  }
+
+  public void setFirstSeenTime(long firstSeenTime) {
+    this.firstSeenTime = firstSeenTime;
+  }
+
+  public long getLastSeenTime() {
+    return lastSeenTime;
+  }
+
+  public void setLastSeenTime(long lastSeenTime) {
+    this.lastSeenTime = lastSeenTime;
+  }
+}
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ContainerReplicaHistory.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ContainerReplicaHistory.java
new file mode 100644
index 0000000..c43bf05
--- /dev/null
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ContainerReplicaHistory.java
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.scm;
+
+import java.util.UUID;
+
+/**
+ * A ContainerReplica timestamp class that tracks first and last seen time.
+ *
+ * Note this only tracks first and last seen time of a container replica.
+ * Recon does not guarantee the replica is available during the whole period
+ * from first seen time to last seen time.
+ * For example, if a replica is moved out of one DN and later moved back to
+ * the same DN, Recon will not record that gap.
+ */
+public class ContainerReplicaHistory {
+  // Datanode UUID
+  private final UUID uuid;
+  // First reported time of the replica on this datanode
+  private final Long firstSeenTime;
+  // Last reported time of the replica
+  private Long lastSeenTime;
+
+  public ContainerReplicaHistory(UUID id, Long firstSeenTime,
+      Long lastSeenTime) {
+    this.uuid = id;
+    this.firstSeenTime = firstSeenTime;
+    this.lastSeenTime = lastSeenTime;
+  }
+
+  public UUID getUuid() {
+    return uuid;
+  }
+
+  public Long getFirstSeenTime() {
+    return firstSeenTime;
+  }
+
+  public Long getLastSeenTime() {
+    return lastSeenTime;
+  }
+
+  public void setLastSeenTime(Long lastSeenTime) {
+    this.lastSeenTime = lastSeenTime;
+  }
+}
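
A trivial illustration of the intended lifecycle, with made-up timestamps: the first report fixes the first seen time, and later reports only refresh the last seen time.

import java.util.UUID;
import org.apache.hadoop.ozone.recon.scm.ContainerReplicaHistory;

public class ReplicaHistoryUpdateExample {
  public static void main(String[] args) {
    // First report of the replica on this datanode at t=1000.
    ContainerReplicaHistory history =
        new ContainerReplicaHistory(UUID.randomUUID(), 1000L, 1000L);
    // A later report only bumps the last seen time; first seen is immutable.
    history.setLastSeenTime(5000L);
    System.out.println(history.getFirstSeenTime() + " -> "
        + history.getLastSeenTime()); // 1000 -> 5000
  }
}
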
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ContainerReplicaHistoryList.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ContainerReplicaHistoryList.java
new file mode 100644
index 0000000..fd905ec
--- /dev/null
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ContainerReplicaHistoryList.java
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.scm;
+
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * A list of ContainerReplicaHistory.
+ *
+ * Used as the value type of the Recon DB replica history table.
+ */
+public class ContainerReplicaHistoryList {
+
+  private List<ContainerReplicaHistory> tsList;
+
+  public ContainerReplicaHistoryList(
+      List<ContainerReplicaHistory> tsList) {
+    this.tsList = tsList;
+  }
+
+  public List<ContainerReplicaHistory> asList() {
+    return Collections.unmodifiableList(tsList);
+  }
+
+  public List<ContainerReplicaHistory> getList() {
+    return tsList;
+  }
+
+  public void setList(List<ContainerReplicaHistory> list) {
+    this.tsList = list;
+  }
+
+}
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ReconContainerManager.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ReconContainerManager.java
index dff4709..f80d6ad 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ReconContainerManager.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ReconContainerManager.java
@@ -18,10 +18,19 @@
 
 package org.apache.hadoop.ozone.recon.scm;
 
+import static java.util.Comparator.comparingLong;
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleEvent.FINALIZE;
 
 import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.stream.Collectors;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.hdds.conf.ConfigurationSource;
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
@@ -30,13 +39,16 @@
 import org.apache.hadoop.hdds.scm.container.ContainerInfo;
 import org.apache.hadoop.hdds.scm.container.ContainerNotFoundException;
 import org.apache.hadoop.hdds.scm.container.ContainerReplica;
+import org.apache.hadoop.hdds.scm.container.ContainerReplicaNotFoundException;
 import org.apache.hadoop.hdds.scm.container.SCMContainerManager;
 import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
-import org.apache.hadoop.hdds.utils.db.BatchOperationHandler;
+import org.apache.hadoop.hdds.utils.db.DBStore;
 import org.apache.hadoop.hdds.utils.db.Table;
-import org.apache.hadoop.ozone.recon.persistence.ContainerSchemaManager;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHistory;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHealthSchemaManager;
+import org.apache.hadoop.ozone.recon.spi.ContainerDBServiceProvider;
 import org.apache.hadoop.ozone.recon.spi.StorageContainerServiceProvider;
 
 import org.slf4j.Logger;
@@ -49,8 +61,12 @@
 
   private static final Logger LOG =
       LoggerFactory.getLogger(ReconContainerManager.class);
-  private StorageContainerServiceProvider scmClient;
-  private ContainerSchemaManager containerSchemaManager;
+  private final StorageContainerServiceProvider scmClient;
+  private final ContainerHealthSchemaManager containerHealthSchemaManager;
+  private final ContainerDBServiceProvider cdbServiceProvider;
+  private final Table<UUID, DatanodeDetails> nodeDB;
+  // Container ID -> Datanode UUID -> Timestamp
+  private final Map<Long, Map<UUID, ContainerReplicaHistory>> replicaHistoryMap;
 
   /**
    * Constructs a mapping class that creates mapping between container names
@@ -65,13 +81,19 @@
   public ReconContainerManager(
       ConfigurationSource conf,
       Table<ContainerID, ContainerInfo> containerStore,
-      BatchOperationHandler batchHandler,
+      DBStore batchHandler,
       PipelineManager pipelineManager,
       StorageContainerServiceProvider scm,
-      ContainerSchemaManager containerSchemaManager) throws IOException {
+      ContainerHealthSchemaManager containerHealthSchemaManager,
+      ContainerDBServiceProvider containerDBServiceProvider)
+      throws IOException {
     super(conf, containerStore, batchHandler, pipelineManager);
     this.scmClient = scm;
-    this.containerSchemaManager = containerSchemaManager;
+    this.containerHealthSchemaManager = containerHealthSchemaManager;
+    this.cdbServiceProvider = containerDBServiceProvider;
+    // batchHandler is the Recon SCM DBStore; it also holds the NODES table
+    this.nodeDB = ReconSCMDBDefinition.NODES.getTable(batchHandler);
+    this.replicaHistoryMap = new ConcurrentHashMap<>();
   }
 
   /**
@@ -171,23 +193,182 @@
 
   /**
    * Add a container Replica for given DataNode.
-   *
-   * @param containerID
-   * @param replica
    */
   @Override
   public void updateContainerReplica(ContainerID containerID,
       ContainerReplica replica)
       throws ContainerNotFoundException {
     super.updateContainerReplica(containerID, replica);
-    // Update container_history table
-    long currentTime = System.currentTimeMillis();
-    String datanodeHost = replica.getDatanodeDetails().getHostName();
-    containerSchemaManager.upsertContainerHistory(containerID.getId(),
-        datanodeHost, currentTime);
+
+    final long currTime = System.currentTimeMillis();
+    final long id = containerID.getId();
+    final DatanodeDetails dnInfo = replica.getDatanodeDetails();
+    final UUID uuid = dnInfo.getUuid();
+
+    // Map from DataNode UUID to replica last seen time
+    final Map<UUID, ContainerReplicaHistory> replicaLastSeenMap =
+        replicaHistoryMap.get(id);
+
+    boolean flushToDB = false;
+
+    // No in-memory entry for this container yet: create one and flush to DB
+    if (replicaLastSeenMap == null) {
+      // putIfAbsent to avoid TOCTOU
+      replicaHistoryMap.putIfAbsent(id,
+          new ConcurrentHashMap<UUID, ContainerReplicaHistory>() {{
+            put(uuid, new ContainerReplicaHistory(uuid, currTime, currTime));
+          }});
+      flushToDB = true;
+    } else {
+      // ContainerID exists, update timestamp in memory
+      final ContainerReplicaHistory ts = replicaLastSeenMap.get(uuid);
+      if (ts == null) {
+        // New Datanode
+        replicaLastSeenMap.put(uuid,
+            new ContainerReplicaHistory(uuid, currTime, currTime));
+        flushToDB = true;
+      } else {
+        // if the object exists, only update the last seen time field
+        ts.setLastSeenTime(currTime);
+      }
+    }
+
+    if (flushToDB) {
+      upsertContainerHistory(id, uuid, currTime);
+    }
   }
 
-  public ContainerSchemaManager getContainerSchemaManager() {
-    return containerSchemaManager;
+  /**
+   * Remove a Container Replica of a given DataNode.
+   */
+  @Override
+  public void removeContainerReplica(ContainerID containerID,
+      ContainerReplica replica) throws ContainerNotFoundException,
+      ContainerReplicaNotFoundException {
+    super.removeContainerReplica(containerID, replica);
+
+    final long id = containerID.getId();
+    final DatanodeDetails dnInfo = replica.getDatanodeDetails();
+    final UUID uuid = dnInfo.getUuid();
+
+    final Map<UUID, ContainerReplicaHistory> replicaLastSeenMap =
+        replicaHistoryMap.get(id);
+    if (replicaLastSeenMap != null) {
+      final ContainerReplicaHistory ts = replicaLastSeenMap.get(uuid);
+      if (ts != null) {
+        // Flush to DB, then remove from in-memory map
+        upsertContainerHistory(id, uuid, ts.getLastSeenTime());
+        replicaLastSeenMap.remove(uuid);
+      }
+    }
   }
+
+  @VisibleForTesting
+  public ContainerHealthSchemaManager getContainerSchemaManager() {
+    return containerHealthSchemaManager;
+  }
+
+  @VisibleForTesting
+  public Map<Long, Map<UUID, ContainerReplicaHistory>> getReplicaHistoryMap() {
+    return replicaHistoryMap;
+  }
+
+  public List<ContainerHistory> getAllContainerHistory(long containerID) {
+    // First, get the existing entries from DB
+    Map<UUID, ContainerReplicaHistory> resMap;
+    try {
+      resMap = cdbServiceProvider.getContainerReplicaHistory(containerID);
+    } catch (IOException ex) {
+      resMap = new HashMap<>();
+      LOG.debug("Unable to retrieve container replica history from RDB.");
+    }
+
+    // Then, update the entries with the latest in-memory info, if available
+    if (replicaHistoryMap != null) {
+      Map<UUID, ContainerReplicaHistory> replicaLastSeenMap =
+          replicaHistoryMap.get(containerID);
+      if (replicaLastSeenMap != null) {
+        Map<UUID, ContainerReplicaHistory> finalResMap = resMap;
+        replicaLastSeenMap.forEach((k, v) ->
+            finalResMap.merge(k, v, (old, latest) -> latest));
+        resMap = finalResMap;
+      }
+    }
+
+    // Finally, convert map to list for output
+    List<ContainerHistory> resList = new ArrayList<>();
+    for (Map.Entry<UUID, ContainerReplicaHistory> entry : resMap.entrySet()) {
+      final UUID uuid = entry.getKey();
+      String hostname = "N/A";
+      // Attempt to retrieve hostname from NODES table
+      if (nodeDB != null) {
+        try {
+          DatanodeDetails dnDetails = nodeDB.get(uuid);
+          if (dnDetails != null) {
+            hostname = dnDetails.getHostName();
+          }
+        } catch (IOException ex) {
+          LOG.debug("Unable to retrieve datanode {} from NODES table. {}",
+              uuid, ex.getMessage());
+        }
+      }
+      final long firstSeenTime = entry.getValue().getFirstSeenTime();
+      final long lastSeenTime = entry.getValue().getLastSeenTime();
+      resList.add(new ContainerHistory(containerID, uuid.toString(), hostname,
+          firstSeenTime, lastSeenTime));
+    }
+    return resList;
+  }
+
+  public List<ContainerHistory> getLatestContainerHistory(long containerID,
+      int limit) {
+    List<ContainerHistory> res = getAllContainerHistory(containerID);
+    res.sort(comparingLong(ContainerHistory::getLastSeenTime).reversed());
+    return res.stream().limit(limit).collect(Collectors.toList());
+  }
+
+  /**
+   * Flush the container replica history in-memory map to DB.
+   * Expected to be called on Recon graceful shutdown.
+   * @param clearMap true to clear the in-memory map after flushing completes.
+   */
+  public void flushReplicaHistoryMapToDB(boolean clearMap) {
+    if (replicaHistoryMap == null) {
+      return;
+    }
+    synchronized (replicaHistoryMap) {
+      try {
+        cdbServiceProvider.batchStoreContainerReplicaHistory(replicaHistoryMap);
+      } catch (IOException e) {
+        LOG.debug("Error flushing container replica history to DB. {}",
+            e.getMessage());
+      }
+      if (clearMap) {
+        replicaHistoryMap.clear();
+      }
+    }
+  }
+
+  public void upsertContainerHistory(long containerID, UUID uuid, long time) {
+    Map<UUID, ContainerReplicaHistory> tsMap;
+    try {
+      tsMap = cdbServiceProvider.getContainerReplicaHistory(containerID);
+      ContainerReplicaHistory ts = tsMap.get(uuid);
+      if (ts == null) {
+        // New entry
+        tsMap.put(uuid, new ContainerReplicaHistory(uuid, time, time));
+      } else {
+        // Entry exists, update last seen time and put it back to DB.
+        ts.setLastSeenTime(time);
+      }
+      cdbServiceProvider.storeContainerReplicaHistory(containerID, tsMap);
+    } catch (IOException e) {
+      LOG.debug("Error upserting container replica history to DB. {}",
+          e.getMessage());
+    }
+  }
+
+  public Table<UUID, DatanodeDetails> getNodeDB() {
+    return nodeDB;
+  }
+
 }
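
The DB/in-memory reconciliation in getAllContainerHistory reduces to a "latest wins" Map.merge. A simplified standalone illustration that uses plain Long timestamps in place of ContainerReplicaHistory objects:

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class MergeLatestExample {
  public static void main(String[] args) {
    UUID datanode = UUID.randomUUID();

    // Entry loaded from RocksDB (stale last seen time).
    Map<UUID, Long> fromDb = new HashMap<>();
    fromDb.put(datanode, 1000L);

    // Fresher entry from the in-memory map.
    Map<UUID, Long> inMemory = new HashMap<>();
    inMemory.put(datanode, 9000L);

    // In-memory info overrides the persisted value, as in getAllContainerHistory.
    inMemory.forEach((k, v) -> fromDb.merge(k, v, (old, latest) -> latest));
    System.out.println(fromDb.get(datanode)); // 9000
  }
}
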
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ReconStorageContainerManagerFacade.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ReconStorageContainerManagerFacade.java
index f413ec3..b9e1313 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ReconStorageContainerManagerFacade.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ReconStorageContainerManagerFacade.java
@@ -55,7 +55,8 @@
 import org.apache.hadoop.hdds.utils.db.DBStoreBuilder;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.ozone.recon.fsck.ContainerHealthTask;
-import org.apache.hadoop.ozone.recon.persistence.ContainerSchemaManager;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHealthSchemaManager;
+import org.apache.hadoop.ozone.recon.spi.ContainerDBServiceProvider;
 import org.apache.hadoop.ozone.recon.spi.StorageContainerServiceProvider;
 import org.apache.hadoop.ozone.recon.tasks.ReconTaskConfig;
 import com.google.inject.Inject;
@@ -82,7 +83,7 @@
 
   private ReconNodeManager nodeManager;
   private ReconPipelineManager pipelineManager;
-  private ContainerManager containerManager;
+  private ReconContainerManager containerManager;
   private NetworkTopology clusterMap;
   private StorageContainerServiceProvider scmServiceProvider;
   private Set<ReconScmTask> reconScmTasks = new HashSet<>();
@@ -94,7 +95,8 @@
   public ReconStorageContainerManagerFacade(OzoneConfiguration conf,
       StorageContainerServiceProvider scmServiceProvider,
       ReconTaskStatusDao reconTaskStatusDao,
-      ContainerSchemaManager containerSchemaManager)
+      ContainerHealthSchemaManager containerHealthSchemaManager,
+      ContainerDBServiceProvider containerDBServiceProvider)
       throws IOException {
     this.eventQueue = new EventQueue();
     eventQueue.setSilent(true);
@@ -117,17 +119,14 @@
     this.datanodeProtocolServer = new ReconDatanodeProtocolServer(
         conf, this, eventQueue);
     this.pipelineManager =
-
         new ReconPipelineManager(conf,
             nodeManager,
             ReconSCMDBDefinition.PIPELINES.getTable(dbStore),
             eventQueue);
     this.containerManager = new ReconContainerManager(conf,
         ReconSCMDBDefinition.CONTAINERS.getTable(dbStore),
-        dbStore,
-        pipelineManager,
-        scmServiceProvider,
-        containerSchemaManager);
+        dbStore, pipelineManager, scmServiceProvider,
+        containerHealthSchemaManager, containerDBServiceProvider);
     this.scmServiceProvider = scmServiceProvider;
 
     NodeReportHandler nodeReportHandler =
@@ -177,8 +176,7 @@
         reconTaskConfig));
     reconScmTasks.add(new ContainerHealthTask(
         containerManager,
-        reconTaskStatusDao,
-        containerSchemaManager,
+        reconTaskStatusDao, containerHealthSchemaManager,
         containerPlacementPolicy,
         reconTaskConfig));
   }
@@ -245,6 +243,8 @@
     IOUtils.cleanupWithLogger(LOG, nodeManager);
     IOUtils.cleanupWithLogger(LOG, containerManager);
     IOUtils.cleanupWithLogger(LOG, pipelineManager);
+    LOG.info("Flushing container replica history to DB.");
+    containerManager.flushReplicaHistoryMapToDB(true);
     try {
       dbStore.close();
     } catch (Exception e) {
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ContainerDBServiceProvider.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ContainerDBServiceProvider.java
index 8e7267d..df771a6 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ContainerDBServiceProvider.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ContainerDBServiceProvider.java
@@ -20,11 +20,13 @@
 
 import java.io.IOException;
 import java.util.Map;
+import java.util.UUID;
 
 import org.apache.hadoop.hdds.annotation.InterfaceStability;
 import org.apache.hadoop.ozone.recon.api.types.ContainerKeyPrefix;
 import org.apache.hadoop.ozone.recon.api.types.ContainerMetadata;
 import org.apache.hadoop.hdds.utils.db.TableIterator;
+import org.apache.hadoop.ozone.recon.scm.ContainerReplicaHistory;
 
 /**
  * The Recon Container DB Service interface.
@@ -66,6 +68,27 @@
   void storeContainerKeyCount(Long containerID, Long count) throws IOException;
 
   /**
+   * Store the containerID -> ContainerReplicaHistory mapping to the
+   * container DB store.
+   *
+   * @param containerID the containerID.
+   * @param tsMap A map from datanode UUID to ContainerReplicaHistory.
+   * @throws IOException
+   */
+  void storeContainerReplicaHistory(Long containerID,
+      Map<UUID, ContainerReplicaHistory> tsMap) throws IOException;
+
+  /**
+   * Batch version of storeContainerReplicaHistory.
+   *
+   * @param replicaHistoryMap Replica history map
+   * @throws IOException
+   */
+  void batchStoreContainerReplicaHistory(
+      Map<Long, Map<UUID, ContainerReplicaHistory>> replicaHistoryMap)
+      throws IOException;
+
+  /**
    * Store the total count of containers into the container DB store.
    *
    * @param count count of the containers present in the system.
@@ -91,6 +114,16 @@
   long getKeyCountForContainer(Long containerID) throws IOException;
 
   /**
+   * Get the container replica history of the given containerID.
+   *
+   * @param containerID the given containerId.
+   * @return A map from datanode UUID to ContainerReplicaHistory for the
+   *         given containerID.
+   * @throws IOException
+   */
+  Map<UUID, ContainerReplicaHistory> getContainerReplicaHistory(
+      Long containerID) throws IOException;
+
+  /**
    * Get if a containerID exists or not.
    *
    * @param containerID the given containerID.
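
A sketch of how a caller might exercise the two new methods. The provider would normally be injected; the container ID and datanode UUID below are made up:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import org.apache.hadoop.ozone.recon.scm.ContainerReplicaHistory;
import org.apache.hadoop.ozone.recon.spi.ContainerDBServiceProvider;

public final class ReplicaHistoryStoreExample {
  private ReplicaHistoryStoreExample() { }

  static void recordAndReadBack(ContainerDBServiceProvider provider)
      throws IOException {
    long containerId = 42L;               // made-up container ID
    UUID datanode = UUID.randomUUID();    // made-up datanode UUID

    Map<UUID, ContainerReplicaHistory> tsMap = new HashMap<>();
    tsMap.put(datanode, new ContainerReplicaHistory(datanode, 1000L, 2000L));
    provider.storeContainerReplicaHistory(containerId, tsMap);

    // Read the mapping back; an unknown container yields an empty map.
    Map<UUID, ContainerReplicaHistory> readBack =
        provider.getContainerReplicaHistory(containerId);
    System.out.println(readBack.get(datanode).getLastSeenTime()); // 2000
  }
}
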
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
index 6360cf2..f613558 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
@@ -22,11 +22,16 @@
 import static org.apache.hadoop.ozone.recon.spi.impl.ReconContainerDBProvider.getNewDBStore;
 import static org.apache.hadoop.ozone.recon.spi.impl.ReconDBDefinition.CONTAINER_KEY;
 import static org.apache.hadoop.ozone.recon.spi.impl.ReconDBDefinition.CONTAINER_KEY_COUNT;
+import static org.apache.hadoop.ozone.recon.spi.impl.ReconDBDefinition.REPLICA_HISTORY;
 
 import java.io.File;
 import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
 import java.util.LinkedHashMap;
+import java.util.List;
 import java.util.Map;
+import java.util.UUID;
 
 import javax.inject.Inject;
 import javax.inject.Singleton;
@@ -34,9 +39,12 @@
 import org.apache.commons.io.FileUtils;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
 import org.apache.hadoop.ozone.recon.ReconUtils;
 import org.apache.hadoop.ozone.recon.api.types.ContainerKeyPrefix;
 import org.apache.hadoop.ozone.recon.api.types.ContainerMetadata;
+import org.apache.hadoop.ozone.recon.scm.ContainerReplicaHistory;
+import org.apache.hadoop.ozone.recon.scm.ContainerReplicaHistoryList;
 import org.apache.hadoop.ozone.recon.spi.ContainerDBServiceProvider;
 import org.apache.hadoop.hdds.utils.db.DBStore;
 import org.apache.hadoop.hdds.utils.db.Table;
@@ -60,6 +68,8 @@
 
   private Table<ContainerKeyPrefix, Integer> containerKeyTable;
   private Table<Long, Long> containerKeyCountTable;
+  private Table<Long, ContainerReplicaHistoryList>
+      containerReplicaHistoryTable;
   private GlobalStatsDao globalStatsDao;
 
   @Inject
@@ -138,6 +148,8 @@
       this.containerKeyTable = CONTAINER_KEY.getTable(containerDbStore);
       this.containerKeyCountTable =
           CONTAINER_KEY_COUNT.getTable(containerDbStore);
+      this.containerReplicaHistoryTable =
+          REPLICA_HISTORY.getTable(containerDbStore);
     } catch (IOException e) {
       LOG.error("Unable to create Container Key tables.", e);
     }
@@ -172,6 +184,55 @@
   }
 
   /**
+   * Store the ContainerID -> ContainerReplicaHistory (container first and last
+   * seen time) mapping to the container DB store.
+   *
+   * @param containerID the containerID.
+   * @param tsMap A map from Datanode UUID to ContainerReplicaHistory.
+   * @throws IOException
+   */
+  @Override
+  public void storeContainerReplicaHistory(Long containerID,
+      Map<UUID, ContainerReplicaHistory> tsMap) throws IOException {
+    List<ContainerReplicaHistory> tsList = new ArrayList<>();
+    for (Map.Entry<UUID, ContainerReplicaHistory> e : tsMap.entrySet()) {
+      tsList.add(e.getValue());
+    }
+
+    containerReplicaHistoryTable.put(containerID,
+        new ContainerReplicaHistoryList(tsList));
+  }
+
+  /**
+   * Batch version of storeContainerReplicaHistory.
+   *
+   * @param replicaHistoryMap Replica history map
+   * @throws IOException
+   */
+  @Override
+  public void batchStoreContainerReplicaHistory(
+      Map<Long, Map<UUID, ContainerReplicaHistory>> replicaHistoryMap)
+      throws IOException {
+    BatchOperation batchOperation = containerDbStore.initBatchOperation();
+
+    for (Map.Entry<Long, Map<UUID, ContainerReplicaHistory>> entry :
+        replicaHistoryMap.entrySet()) {
+      final long containerId = entry.getKey();
+      final Map<UUID, ContainerReplicaHistory> tsMap = entry.getValue();
+
+      List<ContainerReplicaHistory> tsList = new ArrayList<>();
+      for (Map.Entry<UUID, ContainerReplicaHistory> e : tsMap.entrySet()) {
+        tsList.add(e.getValue());
+      }
+
+      containerReplicaHistoryTable.putWithBatch(batchOperation, containerId,
+          new ContainerReplicaHistoryList(tsList));
+    }
+
+    containerDbStore.commitBatchOperation(batchOperation);
+  }
+
+  /**
    * Get the total count of keys within the given containerID.
    *
    * @param containerID the given containerID.
@@ -185,6 +246,34 @@
   }
 
   /**
+   * Get the container replica history of the given containerID.
+   *
+   * @param containerID the given containerId.
+   * @return A map from datanode UUID to ContainerReplicaHistory for the
+   *         given containerID.
+   * @throws IOException
+   */
+  @Override
+  public Map<UUID, ContainerReplicaHistory> getContainerReplicaHistory(
+      Long containerID) throws IOException {
+
+    final ContainerReplicaHistoryList tsList =
+        containerReplicaHistoryTable.get(containerID);
+    if (tsList == null) {
+      // DB doesn't have an existing entry for the containerID, return empty map
+      return new HashMap<>();
+    }
+
+    Map<UUID, ContainerReplicaHistory> res = new HashMap<>();
+    // Populate result map with entries from the DB.
+    // The list should be fairly short (< 10 entries).
+    for (ContainerReplicaHistory ts : tsList.getList()) {
+      final UUID uuid = ts.getUuid();
+      res.put(uuid, ts);
+    }
+    return res;
+  }
+
+  /**
    * Get if a containerID exists or not.
    *
    * @param containerID the given containerID.
@@ -396,4 +485,4 @@
     long containersCount = getCountForContainers();
     storeContainerCount(containersCount + count);
   }
-}
\ No newline at end of file
+}
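
The batch path above follows the usual HDDS DBStore pattern: stage writes against a BatchOperation, then commit once so they become visible atomically. A generic sketch, assuming the store and table come from the Recon container DB:

import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.hdds.utils.db.BatchOperation;
import org.apache.hadoop.hdds.utils.db.DBStore;
import org.apache.hadoop.hdds.utils.db.Table;

final class BatchWriteSketch {
  private BatchWriteSketch() { }

  static void writeAll(DBStore store, Table<Long, Long> table,
      Map<Long, Long> entries) throws IOException {
    BatchOperation batch = store.initBatchOperation();
    for (Map.Entry<Long, Long> e : entries.entrySet()) {
      // Writes are staged here and only hit RocksDB on commit.
      table.putWithBatch(batch, e.getKey(), e.getValue());
    }
    store.commitBatchOperation(batch);
  }
}
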
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ReconDBDefinition.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ReconDBDefinition.java
index 4f5a4c7..a9b23ce 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ReconDBDefinition.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ReconDBDefinition.java
@@ -24,6 +24,8 @@
 import org.apache.hadoop.hdds.utils.db.LongCodec;
 import org.apache.hadoop.ozone.recon.ReconServerConfigKeys;
 import org.apache.hadoop.ozone.recon.api.types.ContainerKeyPrefix;
+import org.apache.hadoop.ozone.recon.codec.ContainerReplicaHistoryListCodec;
+import org.apache.hadoop.ozone.recon.scm.ContainerReplicaHistoryList;
 
 /**
  * RocksDB definition for the DB internal to Recon.
@@ -54,6 +56,15 @@
           Long.class,
           new LongCodec());
 
+  public static final DBColumnFamilyDefinition
+      <Long, ContainerReplicaHistoryList> REPLICA_HISTORY =
+      new DBColumnFamilyDefinition<Long, ContainerReplicaHistoryList>(
+          "replica_history",
+          Long.class,
+          new LongCodec(),
+          ContainerReplicaHistoryList.class,
+          new ContainerReplicaHistoryListCodec());
+
   @Override
   public String getName() {
     return dbName;
@@ -66,6 +77,7 @@
 
   @Override
   public DBColumnFamilyDefinition[] getColumnFamilies() {
-    return new DBColumnFamilyDefinition[] {CONTAINER_KEY, CONTAINER_KEY_COUNT};
+    return new DBColumnFamilyDefinition[] {
+        CONTAINER_KEY, CONTAINER_KEY_COUNT, REPLICA_HISTORY};
   }
 }
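
A minimal sketch of how the new column family is materialized into a typed table once a Recon DB store is open; this mirrors the call made in ContainerDBServiceProviderImpl, with the DBStore instance assumed to exist:

import java.io.IOException;
import org.apache.hadoop.hdds.utils.db.DBStore;
import org.apache.hadoop.hdds.utils.db.Table;
import org.apache.hadoop.ozone.recon.scm.ContainerReplicaHistoryList;
import org.apache.hadoop.ozone.recon.spi.impl.ReconDBDefinition;

final class ReplicaHistoryTableSketch {
  private ReplicaHistoryTableSketch() { }

  static Table<Long, ContainerReplicaHistoryList> openTable(DBStore reconDbStore)
      throws IOException {
    return ReconDBDefinition.REPLICA_HISTORY.getTable(reconDbStore);
  }
}
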
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ContainerKeyMapperTask.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ContainerKeyMapperTask.java
index af67aeb..af38d52 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ContainerKeyMapperTask.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ContainerKeyMapperTask.java
@@ -104,7 +104,6 @@
     return "ContainerKeyMapperTask";
   }
 
-  @Override
   public Collection<String> getTaskTables() {
     return Collections.singletonList(KEY_TABLE);
   }
@@ -113,8 +112,14 @@
   public Pair<String, Boolean> process(OMUpdateEventBatch events) {
     Iterator<OMDBUpdateEvent> eventIterator = events.getIterator();
     int eventCount = 0;
+    final Collection<String> taskTables = getTaskTables();
+
     while (eventIterator.hasNext()) {
       OMDBUpdateEvent<String, OmKeyInfo> omdbUpdateEvent = eventIterator.next();
+      // Filter events here; the batch is no longer pre-filtered per task
+      if (!taskTables.contains(omdbUpdateEvent.getTable())) {
+        continue;
+      }
       String updatedKey = omdbUpdateEvent.getKey();
       OmKeyInfo updatedKeyValue = omdbUpdateEvent.getValue();
       try {
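
The same idiom recurs in FileSizeCountTask and TableCountTask below: the controller no longer hands each task a pre-filtered copy of the batch, so each task consults its own table list inside process() and skips events on tables it does not track. A hedged sketch of that shape, using simplified stand-ins rather than the real OMDBUpdateEvent and ReconOmTask types:

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

/** Simplified stand-ins for the Recon task/event types; not the real classes. */
public final class InTaskFilteringSketch {

  /** Simplified stand-in for OMDBUpdateEvent. */
  public static final class Event {
    private final String table;

    public Event(String table) {
      this.table = table;
    }

    public String getTable() {
      return table;
    }
  }

  private InTaskFilteringSketch() {
  }

  /** Tables this hypothetical task listens on. */
  static Collection<String> getTaskTables() {
    return Collections.singletonList("keyTable");
  }

  /** Process the whole batch; filtering happens here, not in the controller. */
  static int process(List<Event> events) {
    final Collection<String> taskTables = getTaskTables();
    int processed = 0;
    Iterator<Event> it = events.iterator();
    while (it.hasNext()) {
      Event e = it.next();
      if (!taskTables.contains(e.getTable())) {
        continue;  // skip events from tables this task ignores
      }
      processed++;   // a real task would update its aggregates here
    }
    return processed;
  }
}
```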
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
index e0a592b..e14096a 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
@@ -110,7 +110,6 @@
     return "FileSizeCountTask";
   }
 
-  @Override
   public Collection<String> getTaskTables() {
     return Collections.singletonList(KEY_TABLE);
   }
@@ -126,9 +125,14 @@
   public Pair<String, Boolean> process(OMUpdateEventBatch events) {
     Iterator<OMDBUpdateEvent> eventIterator = events.getIterator();
     Map<FileSizeCountKey, Long> fileSizeCountMap = new HashMap<>();
+    final Collection<String> taskTables = getTaskTables();
 
     while (eventIterator.hasNext()) {
       OMDBUpdateEvent<String, OmKeyInfo> omdbUpdateEvent = eventIterator.next();
+      // Filter events here; the batch is no longer pre-filtered per task
+      if (!taskTables.contains(omdbUpdateEvent.getTable())) {
+        continue;
+      }
       String updatedKey = omdbUpdateEvent.getKey();
       OmKeyInfo omKeyInfo = omdbUpdateEvent.getValue();
 
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/OMUpdateEventBatch.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/OMUpdateEventBatch.java
index cc40811..3ed50a4 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/OMUpdateEventBatch.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/OMUpdateEventBatch.java
@@ -18,21 +18,18 @@
 
 package org.apache.hadoop.ozone.recon.tasks;
 
-import java.util.ArrayList;
-import java.util.Collection;
 import java.util.Iterator;
 import java.util.List;
-import java.util.stream.Collectors;
 
 /**
  * Wrapper class to hold multiple OM DB update events.
  */
 public class OMUpdateEventBatch {
 
-  private List<OMDBUpdateEvent> events;
+  private final List<OMDBUpdateEvent> events;
 
-  public OMUpdateEventBatch(Collection<OMDBUpdateEvent> e) {
-    events = new ArrayList<>(e);
+  public OMUpdateEventBatch(List<OMDBUpdateEvent> e) {
+    events = e;
   }
 
   /**
@@ -56,18 +53,6 @@
   }
 
   /**
-   * Filter events based on Tables.
-   * @param tables set of tables to filter on.
-   * @return trimmed event batch.
-   */
-  public OMUpdateEventBatch filter(Collection<String> tables) {
-    return new OMUpdateEventBatch(events
-        .stream()
-        .filter(e -> tables.contains(e.getTable()))
-        .collect(Collectors.toList()));
-  }
-
-  /**
    * Return if empty.
    * @return true if empty, else false.
    */
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconOmTask.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconOmTask.java
index d426bb39..e904334 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconOmTask.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconOmTask.java
@@ -18,8 +18,6 @@
 
 package org.apache.hadoop.ozone.recon.tasks;
 
-import java.util.Collection;
-
 import org.apache.commons.lang3.tuple.Pair;
 import org.apache.hadoop.ozone.om.OMMetadataManager;
 
@@ -35,13 +33,6 @@
   String getTaskName();
 
   /**
-   * Return the list of tables that the task is listening on.
-   * Empty list means the task is NOT listening on any tables.
-   * @return Collection of Tables.
-   */
-  Collection<String> getTaskTables();
-
-  /**
    * Process a set of OM events on tables that the task is listening on.
    * @param events Set of events to be processed by the task.
    * @return Pair of task name -> task success.
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
index 4409853..38d8709 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ReconTaskControllerImpl.java
@@ -109,8 +109,8 @@
         for (Map.Entry<String, ReconOmTask> taskEntry :
             reconOmTasks.entrySet()) {
           ReconOmTask task = taskEntry.getValue();
-          Collection<String> tables = task.getTaskTables();
-          tasks.add(() -> task.process(events.filter(tables)));
+          // Events passed to process() are no longer pre-filtered here
+          tasks.add(() -> task.process(events));
         }
 
         List<Future<Pair<String, Boolean>>> results =
@@ -123,8 +123,8 @@
           tasks.clear();
           for (String taskName : failedTasks) {
             ReconOmTask task = reconOmTasks.get(taskName);
-            Collection<String> tables = task.getTaskTables();
-            tasks.add(() -> task.process(events.filter(tables)));
+          // Events passed to process() are no longer pre-filtered here
+            tasks.add(() -> task.process(events));
           }
           results = executorService.invokeAll(tasks);
           retryFailedTasks = processTaskResults(results, events);
diff --git a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/TableCountTask.java b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/TableCountTask.java
index 2621529..79b28fe 100644
--- a/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/TableCountTask.java
+++ b/hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/TableCountTask.java
@@ -99,7 +99,6 @@
     return "TableCountTask";
   }
 
-  @Override
   public Collection<String> getTaskTables() {
     return new ArrayList<>(reconOMMetadataManager.listTableNames());
   }
@@ -114,11 +113,15 @@
   @Override
   public Pair<String, Boolean> process(OMUpdateEventBatch events) {
     Iterator<OMDBUpdateEvent> eventIterator = events.getIterator();
-
     HashMap<String, Long> objectCountMap = initializeCountMap();
+    final Collection<String> taskTables = getTaskTables();
 
     while (eventIterator.hasNext()) {
       OMDBUpdateEvent<String, Object> omdbUpdateEvent = eventIterator.next();
+      // Filter events here; the batch is no longer pre-filtered per task
+      if (!taskTables.contains(omdbUpdateEvent.getTable())) {
+        continue;
+      }
       String rowKey = getRowKeyFromTable(omdbUpdateEvent.getTable());
       try{
         switch (omdbUpdateEvent.getAction()) {
diff --git a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerEndpoint.java b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerEndpoint.java
index 6ba6f56..49aa306 100644
--- a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerEndpoint.java
+++ b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerEndpoint.java
@@ -30,6 +30,7 @@
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
+import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
@@ -39,16 +40,18 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
+import java.util.UUID;
 import java.util.stream.Collectors;
 import javax.ws.rs.WebApplicationException;
 import javax.ws.rs.core.Response;
 
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
 import org.apache.hadoop.hdds.scm.container.ContainerInfo;
-import org.apache.hadoop.hdds.scm.container.ContainerManager;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
 import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
 import org.apache.hadoop.hdds.scm.server.OzoneStorageContainerManager;
@@ -64,7 +67,8 @@
 import org.apache.hadoop.ozone.recon.api.types.MissingContainersResponse;
 import org.apache.hadoop.ozone.recon.api.types.UnhealthyContainerMetadata;
 import org.apache.hadoop.ozone.recon.api.types.UnhealthyContainersResponse;
-import org.apache.hadoop.ozone.recon.persistence.ContainerSchemaManager;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHistory;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHealthSchemaManager;
 import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
 import org.apache.hadoop.ozone.recon.scm.ReconContainerManager;
 import org.apache.hadoop.ozone.recon.scm.ReconStorageContainerManagerFacade;
@@ -76,14 +80,12 @@
 import org.apache.hadoop.hdds.utils.db.Table;
 import org.hadoop.ozone.recon.schema.ContainerSchemaDefinition;
 import org.hadoop.ozone.recon.schema.ContainerSchemaDefinition.UnHealthyContainerStates;
-import org.hadoop.ozone.recon.schema.tables.pojos.ContainerHistory;
 import org.hadoop.ozone.recon.schema.tables.pojos.UnhealthyContainers;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.TemporaryFolder;
-import org.mockito.Mockito;
 
 /**
  * Test for container endpoint.
@@ -93,15 +95,22 @@
   @Rule
   public TemporaryFolder temporaryFolder = new TemporaryFolder();
 
+  private OzoneStorageContainerManager ozoneStorageContainerManager;
+  private ReconContainerManager reconContainerManager;
   private ContainerDBServiceProvider containerDbServiceProvider;
   private ContainerEndpoint containerEndpoint;
   private boolean isSetupDone = false;
-  private ContainerSchemaManager containerSchemaManager;
+  private ContainerHealthSchemaManager containerHealthSchemaManager;
   private ReconOMMetadataManager reconOMMetadataManager;
   private ContainerID containerID = new ContainerID(1L);
   private PipelineID pipelineID;
   private long keyCount = 5L;
 
+  private UUID uuid1;
+  private UUID uuid2;
+  private UUID uuid3;
+  private UUID uuid4;
+
   private void initializeInjector() throws Exception {
     reconOMMetadataManager = getTestReconOmMetadataManager(
         initializeNewOmMetadataManager(temporaryFolder.newFolder()),
@@ -110,41 +119,31 @@
     Pipeline pipeline = getRandomPipeline();
     pipelineID = pipeline.getId();
 
-    // Mock ReconStorageContainerManagerFacade and other SCM related methods
-    OzoneStorageContainerManager mockReconSCM =
-        mock(ReconStorageContainerManagerFacade.class);
-    ContainerManager mockContainerManager =
-        mock(ReconContainerManager.class);
-
-    when(mockContainerManager.getContainer(Mockito.any(ContainerID.class)))
-        .thenReturn(
-        new ContainerInfo.Builder()
-            .setContainerID(containerID.getId())
-            .setNumberOfKeys(keyCount)
-            .setReplicationFactor(ReplicationFactor.THREE)
-            .setPipelineID(pipelineID)
-            .build());
-    when(mockReconSCM.getContainerManager())
-        .thenReturn(mockContainerManager);
-
     ReconTestInjector reconTestInjector =
         new ReconTestInjector.Builder(temporaryFolder)
             .withReconSqlDb()
             .withReconOm(reconOMMetadataManager)
             .withOmServiceProvider(mock(OzoneManagerServiceProviderImpl.class))
-            .withReconScm(mockReconSCM)
+            // No longer using a mock ReconSCM because the facade's node DB
+            //  is needed to map datanode UUIDs to hostnames
+            .addBinding(OzoneStorageContainerManager.class,
+                ReconStorageContainerManagerFacade.class)
             .withContainerDB()
             .addBinding(StorageContainerServiceProvider.class,
                 mock(StorageContainerServiceProviderImpl.class))
             .addBinding(ContainerEndpoint.class)
-            .addBinding(ContainerSchemaManager.class)
+            .addBinding(ContainerHealthSchemaManager.class)
             .build();
 
+    ozoneStorageContainerManager =
+        reconTestInjector.getInstance(OzoneStorageContainerManager.class);
+    reconContainerManager = (ReconContainerManager)
+        ozoneStorageContainerManager.getContainerManager();
     containerDbServiceProvider =
         reconTestInjector.getInstance(ContainerDBServiceProvider.class);
     containerEndpoint = reconTestInjector.getInstance(ContainerEndpoint.class);
-    containerSchemaManager =
-        reconTestInjector.getInstance(ContainerSchemaManager.class);
+    containerHealthSchemaManager =
+        reconTestInjector.getInstance(ContainerHealthSchemaManager.class);
   }
 
   @Before
@@ -230,7 +229,6 @@
 
   @Test
   public void testGetKeysForContainer() {
-
     Response response = containerEndpoint.getKeysForContainer(1L, -1, "");
 
     KeysResponse data = (KeysResponse) response.getEntity();
@@ -323,7 +321,6 @@
 
   @Test
   public void testGetContainers() {
-
     Response response = containerEndpoint.getContainers(-1, 0L);
 
     ContainersResponse responseObject =
@@ -404,7 +401,7 @@
   }
 
   @Test
-  public void testGetMissingContainers() {
+  public void testGetMissingContainers() throws IOException {
     Response response = containerEndpoint.getMissingContainers();
 
     MissingContainersResponse responseObject =
@@ -426,12 +423,18 @@
     ArrayList<UnhealthyContainers> missingList =
         new ArrayList<UnhealthyContainers>();
     missingList.add(missing);
-    containerSchemaManager.insertUnhealthyContainerRecords(missingList);
+    containerHealthSchemaManager.insertUnhealthyContainerRecords(missingList);
+
+    putContainerInfos(1);
     // Add container history for id 1
-    containerSchemaManager.upsertContainerHistory(1L, "host1", 1L);
-    containerSchemaManager.upsertContainerHistory(1L, "host2", 2L);
-    containerSchemaManager.upsertContainerHistory(1L, "host3", 3L);
-    containerSchemaManager.upsertContainerHistory(1L, "host4", 4L);
+    final UUID u1 = newDatanode("host1", "127.0.0.1");
+    final UUID u2 = newDatanode("host2", "127.0.0.2");
+    final UUID u3 = newDatanode("host3", "127.0.0.3");
+    final UUID u4 = newDatanode("host4", "127.0.0.4");
+    reconContainerManager.upsertContainerHistory(1L, u1, 1L);
+    reconContainerManager.upsertContainerHistory(1L, u2, 2L);
+    reconContainerManager.upsertContainerHistory(1L, u3, 3L);
+    reconContainerManager.upsertContainerHistory(1L, u4, 4L);
 
     response = containerEndpoint.getMissingContainers();
     responseObject = (MissingContainersResponse) response.getEntity();
@@ -454,8 +457,29 @@
     });
   }
 
+  ContainerInfo newContainerInfo(long containerId) {
+    return new ContainerInfo.Builder()
+        .setContainerID(containerId)
+        .setReplicationType(HddsProtos.ReplicationType.RATIS)
+        .setState(HddsProtos.LifeCycleState.OPEN)
+        .setOwner("owner1")
+        .setNumberOfKeys(keyCount)
+        .setReplicationFactor(ReplicationFactor.THREE)
+        .setPipelineID(pipelineID)
+        .build();
+  }
+
+  void putContainerInfos(int num) throws IOException {
+    for (int i = 1; i <= num; i++) {
+      final ContainerInfo info = newContainerInfo(i);
+      reconContainerManager.getContainerStore().put(new ContainerID(i), info);
+      reconContainerManager.getContainerStateManager().addContainerInfo(
+          i, info, null, null);
+    }
+  }
+
   @Test
-  public void testUnhealthyContainers() {
+  public void testUnhealthyContainers() throws IOException {
     Response response = containerEndpoint.getUnhealthyContainers(1000, 1);
 
     UnhealthyContainersResponse responseObject =
@@ -468,6 +492,11 @@
 
     assertEquals(Collections.EMPTY_LIST, responseObject.getContainers());
 
+    putContainerInfos(14);
+    uuid1 = newDatanode("host1", "127.0.0.1");
+    uuid2 = newDatanode("host2", "127.0.0.2");
+    uuid3 = newDatanode("host3", "127.0.0.3");
+    uuid4 = newDatanode("host4", "127.0.0.4");
     createUnhealthyRecords(5, 4, 3, 2);
 
     response = containerEndpoint.getUnhealthyContainers(1000, 1);
@@ -544,7 +573,7 @@
   }
 
   @Test
-  public void testUnhealthyContainersFilteredResponse() {
+  public void testUnhealthyContainersFilteredResponse() throws IOException {
     String missing =  UnHealthyContainerStates.MISSING.toString();
 
     Response response = containerEndpoint
@@ -559,9 +588,14 @@
     assertEquals(0, responseObject.getMisReplicatedCount());
     assertEquals(Collections.EMPTY_LIST, responseObject.getContainers());
 
+    putContainerInfos(5);
+    uuid1 = newDatanode("host1", "127.0.0.1");
+    uuid2 = newDatanode("host2", "127.0.0.2");
+    uuid3 = newDatanode("host3", "127.0.0.3");
+    uuid4 = newDatanode("host4", "127.0.0.4");
     createUnhealthyRecords(5, 4, 3, 2);
 
-    response =  containerEndpoint.getUnhealthyContainers(missing, 1000, 1);
+    response = containerEndpoint.getUnhealthyContainers(missing, 1000, 1);
 
     responseObject = (UnhealthyContainersResponse) response.getEntity();
     // Summary should have the count for all unhealthy:
@@ -592,7 +626,12 @@
   }
 
   @Test
-  public void testUnhealthyContainersPaging() {
+  public void testUnhealthyContainersPaging() throws IOException {
+    putContainerInfos(6);
+    uuid1 = newDatanode("host1", "127.0.0.1");
+    uuid2 = newDatanode("host2", "127.0.0.2");
+    uuid3 = newDatanode("host3", "127.0.0.3");
+    uuid4 = newDatanode("host4", "127.0.0.4");
     createUnhealthyRecords(5, 4, 3, 2);
     UnhealthyContainersResponse firstBatch =
         (UnhealthyContainersResponse) containerEndpoint.getUnhealthyContainers(
@@ -618,47 +657,75 @@
   }
 
   @Test
-  public void testGetReplicaHistoryForContainer() {
-    // Add container history for id 1
-    containerSchemaManager.upsertContainerHistory(1L, "host1", 1L);
-    containerSchemaManager.upsertContainerHistory(1L, "host2", 2L);
-    containerSchemaManager.upsertContainerHistory(1L, "host3", 3L);
-    containerSchemaManager.upsertContainerHistory(1L, "host4", 4L);
-    containerSchemaManager.upsertContainerHistory(1L, "host1", 5L);
+  public void testGetReplicaHistoryForContainer() throws IOException {
+    // Add container history for container id 1
+    final UUID u1 = newDatanode("host1", "127.0.0.1");
+    final UUID u2 = newDatanode("host2", "127.0.0.2");
+    final UUID u3 = newDatanode("host3", "127.0.0.3");
+    final UUID u4 = newDatanode("host4", "127.0.0.4");
+    reconContainerManager.upsertContainerHistory(1L, u1, 1L);
+    reconContainerManager.upsertContainerHistory(1L, u2, 2L);
+    reconContainerManager.upsertContainerHistory(1L, u3, 3L);
+    reconContainerManager.upsertContainerHistory(1L, u4, 4L);
+
+    reconContainerManager.upsertContainerHistory(1L, u1, 5L);
 
     Response response = containerEndpoint.getReplicaHistoryForContainer(1L);
     List<ContainerHistory> histories =
         (List<ContainerHistory>) response.getEntity();
     Set<String> datanodes = Collections.unmodifiableSet(
-        new HashSet<>(Arrays.asList("host1", "host2", "host3", "host4")));
+        new HashSet<>(Arrays.asList(
+            u1.toString(), u2.toString(), u3.toString(), u4.toString())));
     Assert.assertEquals(4, histories.size());
     histories.forEach(history -> {
-      Assert.assertTrue(datanodes.contains(history.getDatanodeHost()));
-      if (history.getDatanodeHost().equals("host1")) {
-        Assert.assertEquals(1L, (long) history.getFirstReportTimestamp());
-        Assert.assertEquals(5L, (long) history.getLastReportTimestamp());
+      Assert.assertTrue(datanodes.contains(history.getDatanodeUuid()));
+      if (history.getDatanodeUuid().equals(u1.toString())) {
+        Assert.assertEquals("host1", history.getDatanodeHost());
+        Assert.assertEquals(1L, history.getFirstSeenTime());
+        Assert.assertEquals(5L, history.getLastSeenTime());
       }
     });
+
+    // Check getLatestContainerHistory
+    List<ContainerHistory> hist1 = reconContainerManager
+        .getLatestContainerHistory(1L, 10);
+    Assert.assertTrue(hist1.size() <= 10);
+    // Verify descending order by last seen time
+    for (int i = 0; i < hist1.size() - 1; i++) {
+      Assert.assertTrue(hist1.get(i).getLastSeenTime()
+          >= hist1.get(i + 1).getLastSeenTime());
+    }
+  }
+
+  UUID newDatanode(String hostName, String ipAddress) throws IOException {
+    final UUID uuid = UUID.randomUUID();
+    reconContainerManager.getNodeDB().put(uuid,
+        DatanodeDetails.newBuilder()
+            .setUuid(uuid)
+            .setHostName(hostName)
+            .setIpAddress(ipAddress)
+            .build());
+    return uuid;
   }
 
   private void createUnhealthyRecords(int missing, int overRep, int underRep,
       int misRep) {
     int cid = 0;
-    for (int i=0; i<missing; i++) {
-      createUnhealthyRecord(++cid,
-          UnHealthyContainerStates.MISSING.toString(), 3, 0, 3, null);
+    for (int i = 0; i < missing; i++) {
+      createUnhealthyRecord(++cid, UnHealthyContainerStates.MISSING.toString(),
+          3, 0, 3, null);
     }
-    for (int i=0; i<overRep; i++) {
+    for (int i = 0; i < overRep; i++) {
       createUnhealthyRecord(++cid,
           UnHealthyContainerStates.OVER_REPLICATED.toString(),
           3, 5, -2, null);
     }
-    for (int i=0; i<underRep; i++) {
+    for (int i = 0; i < underRep; i++) {
       createUnhealthyRecord(++cid,
           UnHealthyContainerStates.UNDER_REPLICATED.toString(),
           3, 1, 2, null);
     }
-    for (int i=0; i<misRep; i++) {
+    for (int i = 0; i < misRep; i++) {
       createUnhealthyRecord(++cid,
           UnHealthyContainerStates.MIS_REPLICATED.toString(),
           2, 1, 1, "some reason");
@@ -677,14 +744,13 @@
     missing.setReplicaDelta(delta);
     missing.setReason(reason);
 
-    ArrayList<UnhealthyContainers> missingList =
-        new ArrayList<UnhealthyContainers>();
+    ArrayList<UnhealthyContainers> missingList = new ArrayList<>();
     missingList.add(missing);
-    containerSchemaManager.insertUnhealthyContainerRecords(missingList);
+    containerHealthSchemaManager.insertUnhealthyContainerRecords(missingList);
 
-    containerSchemaManager.upsertContainerHistory(cID, "host1", 1L);
-    containerSchemaManager.upsertContainerHistory(cID, "host2", 2L);
-    containerSchemaManager.upsertContainerHistory(cID, "host3", 3L);
-    containerSchemaManager.upsertContainerHistory(cID, "host4", 4L);
+    reconContainerManager.upsertContainerHistory(cID, uuid1, 1L);
+    reconContainerManager.upsertContainerHistory(cID, uuid2, 2L);
+    reconContainerManager.upsertContainerHistory(cID, uuid3, 3L);
+    reconContainerManager.upsertContainerHistory(cID, uuid4, 4L);
   }
-}
\ No newline at end of file
+}
diff --git a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java
index acca61d..33d642e 100644
--- a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java
+++ b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestEndpoints.java
@@ -60,7 +60,7 @@
 import org.apache.hadoop.ozone.recon.api.types.PipelineMetadata;
 import org.apache.hadoop.ozone.recon.api.types.PipelinesResponse;
 import org.apache.hadoop.ozone.recon.persistence.AbstractReconSqlDBTest;
-import org.apache.hadoop.ozone.recon.persistence.ContainerSchemaManager;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHealthSchemaManager;
 import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
 import org.apache.hadoop.ozone.recon.scm.ReconStorageContainerManagerFacade;
 import org.apache.hadoop.ozone.recon.spi.StorageContainerServiceProvider;
@@ -201,10 +201,11 @@
                 mockScmServiceProvider)
             .addBinding(OzoneStorageContainerManager.class,
                 ReconStorageContainerManagerFacade.class)
+            .withContainerDB()
             .addBinding(ClusterStateEndpoint.class)
             .addBinding(NodeEndpoint.class)
             .addBinding(MetricsServiceProviderFactory.class)
-            .addBinding(ContainerSchemaManager.class)
+            .addBinding(ContainerHealthSchemaManager.class)
             .addBinding(UtilizationEndpoint.class)
             .addBinding(ReconUtils.class, reconUtilsMock)
             .addBinding(StorageContainerLocationProtocol.class, mockScmClient)
diff --git a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/fsck/TestContainerHealthTask.java b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/fsck/TestContainerHealthTask.java
index d97b143..469ddf9 100644
--- a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/fsck/TestContainerHealthTask.java
+++ b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/fsck/TestContainerHealthTask.java
@@ -43,12 +43,11 @@
 import org.apache.hadoop.hdds.scm.container.ContainerManager;
 import org.apache.hadoop.hdds.scm.container.ContainerReplica;
 import org.apache.hadoop.hdds.scm.container.placement.algorithms.ContainerPlacementStatusDefault;
-import org.apache.hadoop.ozone.recon.persistence.ContainerSchemaManager;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHealthSchemaManager;
 import org.apache.hadoop.ozone.recon.scm.ReconStorageContainerManagerFacade;
 import org.apache.hadoop.ozone.recon.tasks.ReconTaskConfig;
 import org.apache.hadoop.test.LambdaTestUtils;
 import org.hadoop.ozone.recon.schema.ContainerSchemaDefinition;
-import org.hadoop.ozone.recon.schema.tables.daos.ContainerHistoryDao;
 import org.apache.hadoop.ozone.recon.persistence.AbstractReconSqlDBTest;
 import org.hadoop.ozone.recon.schema.tables.daos.ReconTaskStatusDao;
 import org.hadoop.ozone.recon.schema.tables.daos.UnhealthyContainersDao;
@@ -66,9 +65,8 @@
     UnhealthyContainersDao unHealthyContainersTableHandle =
         getDao(UnhealthyContainersDao.class);
 
-    ContainerSchemaManager containerSchemaManager =
-        new ContainerSchemaManager(
-            mock(ContainerHistoryDao.class),
+    ContainerHealthSchemaManager containerHealthSchemaManager =
+        new ContainerHealthSchemaManager(
             getSchemaDefinition(ContainerSchemaDefinition.class),
             unHealthyContainersTableHandle);
     ReconStorageContainerManagerFacade scmMock =
@@ -127,7 +125,7 @@
     reconTaskConfig.setMissingContainerTaskInterval(Duration.ofSeconds(2));
     ContainerHealthTask containerHealthTask =
         new ContainerHealthTask(scmMock.getContainerManager(),
-            reconTaskStatusDao, containerSchemaManager,
+            reconTaskStatusDao, containerHealthSchemaManager,
             placementMock, reconTaskConfig);
     containerHealthTask.start();
     LambdaTestUtils.await(6000, 1000, () ->
diff --git a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/scm/AbstractReconContainerManagerTest.java b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/scm/AbstractReconContainerManagerTest.java
index 365ab5f..9fb4cdd 100644
--- a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/scm/AbstractReconContainerManagerTest.java
+++ b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/scm/AbstractReconContainerManagerTest.java
@@ -36,7 +36,8 @@
 import org.apache.hadoop.hdds.utils.db.DBStore;
 import org.apache.hadoop.hdds.utils.db.DBStoreBuilder;
 import org.apache.hadoop.hdds.utils.db.Table;
-import org.apache.hadoop.ozone.recon.persistence.ContainerSchemaManager;
+import org.apache.hadoop.ozone.recon.persistence.ContainerHealthSchemaManager;
+import org.apache.hadoop.ozone.recon.spi.ContainerDBServiceProvider;
 import org.apache.hadoop.ozone.recon.spi.StorageContainerServiceProvider;
 
 import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState.OPEN;
@@ -96,7 +97,8 @@
         store,
         pipelineManager,
         getScmServiceProvider(),
-        mock(ContainerSchemaManager.class));
+        mock(ContainerHealthSchemaManager.class),
+        mock(ContainerDBServiceProvider.class));
   }
 
   @After
diff --git a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/scm/TestReconContainerManager.java b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/scm/TestReconContainerManager.java
index 9f47779..1fe32d1 100644
--- a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/scm/TestReconContainerManager.java
+++ b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/scm/TestReconContainerManager.java
@@ -28,14 +28,20 @@
 
 import java.io.IOException;
 import java.util.List;
+import java.util.Map;
 import java.util.NavigableSet;
+import java.util.UUID;
 
 import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
 import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State;
 import org.apache.hadoop.hdds.scm.container.ContainerID;
 import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.ContainerReplica;
 import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.junit.Assert;
 import org.junit.Test;
 
 /**
@@ -139,4 +145,98 @@
     assertEquals(CLOSING,
         getContainerManager().getContainer(containerID).getState());
   }
-}
\ No newline at end of file
+
+  ContainerInfo newContainerInfo(long containerId) {
+    return new ContainerInfo.Builder()
+        .setContainerID(containerId)
+        .setReplicationType(HddsProtos.ReplicationType.RATIS)
+        .setState(HddsProtos.LifeCycleState.OPEN)
+        .setOwner("owner2")
+        .setNumberOfKeys(99L)
+        .setReplicationFactor(HddsProtos.ReplicationFactor.THREE)
+        .setPipelineID(PipelineID.randomId())
+        .build();
+  }
+
+  void putContainerInfos(ReconContainerManager containerManager, int num)
+      throws IOException {
+    for (int i = 1; i <= num; i++) {
+      final ContainerInfo info = newContainerInfo(i);
+      containerManager.getContainerStore().put(new ContainerID(i), info);
+      containerManager.getContainerStateManager()
+          .addContainerInfo(i, info, null, null);
+    }
+  }
+
+  @Test
+  public void testUpdateAndRemoveContainerReplica() throws IOException {
+    // Sanity checking updateContainerReplica and ContainerReplicaHistory
+
+    // Init Container 1
+    final long cIDlong1 = 1L;
+    final ContainerID containerID1 = new ContainerID(cIDlong1);
+
+    // Init DN01
+    final UUID uuid1 = UUID.randomUUID();
+    final DatanodeDetails datanodeDetails1 = DatanodeDetails.newBuilder()
+        .setUuid(uuid1).setHostName("host1").setIpAddress("127.0.0.1").build();
+    final ContainerReplica containerReplica1 = ContainerReplica.newBuilder()
+        .setContainerID(containerID1).setContainerState(State.OPEN)
+        .setDatanodeDetails(datanodeDetails1).build();
+
+    final ReconContainerManager containerManager = getContainerManager();
+    final Map<Long, Map<UUID, ContainerReplicaHistory>> repHistMap =
+        containerManager.getReplicaHistoryMap();
+    // Should be empty at the beginning
+    Assert.assertEquals(0, repHistMap.size());
+
+    // Put a replica info and call updateContainerReplica
+    putContainerInfos(containerManager, 10);
+    containerManager.updateContainerReplica(containerID1, containerReplica1);
+    // Should have 1 container entry in the replica history map
+    Assert.assertEquals(1, repHistMap.size());
+    // Should only have 1 entry for this replica (on DN01)
+    Assert.assertEquals(1, repHistMap.get(cIDlong1).size());
+    ContainerReplicaHistory repHist1 = repHistMap.get(cIDlong1).get(uuid1);
+    Assert.assertEquals(uuid1, repHist1.getUuid());
+    // Because this is a new entry, first seen time equals last seen time
+    assertEquals(repHist1.getLastSeenTime(), repHist1.getFirstSeenTime());
+
+    // Let's update the entry again
+    containerManager.updateContainerReplica(containerID1, containerReplica1);
+    // Should still have 1 entry in the replica history map
+    Assert.assertEquals(1, repHistMap.size());
+    // Now last seen time should be larger than first seen time
+    Assert.assertTrue(repHist1.getLastSeenTime() > repHist1.getFirstSeenTime());
+
+    // Init DN02
+    final UUID uuid2 = UUID.randomUUID();
+    final DatanodeDetails datanodeDetails2 = DatanodeDetails.newBuilder()
+        .setUuid(uuid2).setHostName("host2").setIpAddress("127.0.0.2").build();
+    final ContainerReplica containerReplica2 = ContainerReplica.newBuilder()
+        .setContainerID(containerID1).setContainerState(State.OPEN)
+        .setDatanodeDetails(datanodeDetails2).build();
+
+    // Add replica to DN02
+    containerManager.updateContainerReplica(containerID1, containerReplica2);
+
+    // Should still have 1 container entry in the replica history map
+    Assert.assertEquals(1, repHistMap.size());
+    // Should have 2 entries for this replica (on DN01 and DN02)
+    Assert.assertEquals(2, repHistMap.get(cIDlong1).size());
+    ContainerReplicaHistory repHist2 = repHistMap.get(cIDlong1).get(uuid2);
+    Assert.assertEquals(uuid2, repHist2.getUuid());
+    // Because this is a new entry, first seen time equals last seen time
+    assertEquals(repHist2.getLastSeenTime(), repHist2.getFirstSeenTime());
+
+    // Remove replica from DN01
+    containerManager.removeContainerReplica(containerID1, containerReplica1);
+    // Should still have 1 container entry in the replica history map
+    Assert.assertEquals(1, repHistMap.size());
+    // Should have 1 entry for this replica
+    Assert.assertEquals(1, repHistMap.get(cIDlong1).size());
+    // And the only entry should match DN02
+    Assert.assertEquals(uuid2,
+        repHistMap.get(cIDlong1).keySet().iterator().next());
+  }
+}
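
The assertions above pin down the expected bookkeeping: the first report of a replica on a datanode creates an entry whose first and last seen times are equal, a repeat report only advances the last seen time, and removing the replica drops that datanode's entry. A self-contained sketch of a map with those semantics, using assumed field names rather than the real ContainerReplicaHistory class:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

/** Minimal sketch of the first/last-seen bookkeeping the test asserts. */
public final class ReplicaSeenTimesSketch {

  /** Simplified stand-in for ContainerReplicaHistory. */
  public static final class Seen {
    private final long firstSeenTime;
    private long lastSeenTime;

    Seen(long now) {
      this.firstSeenTime = now;
      this.lastSeenTime = now;
    }

    void touch(long now) {
      this.lastSeenTime = now;
    }

    public long getFirstSeenTime() {
      return firstSeenTime;
    }

    public long getLastSeenTime() {
      return lastSeenTime;
    }
  }

  // containerID -> (datanode UUID -> seen times)
  private final Map<Long, Map<UUID, Seen>> histMap = new HashMap<>();

  /** First report creates the entry (first == last); later reports only
   *  advance the last seen time. */
  public void upsert(long containerId, UUID datanode, long now) {
    histMap.computeIfAbsent(containerId, k -> new HashMap<>())
        .compute(datanode, (k, v) -> {
          if (v == null) {
            return new Seen(now);
          }
          v.touch(now);
          return v;
        });
  }

  /** Removing the replica drops the datanode entry for that container. */
  public void remove(long containerId, UUID datanode) {
    Map<UUID, Seen> perContainer = histMap.get(containerId);
    if (perContainer != null) {
      perContainer.remove(datanode);
    }
  }

  public Map<UUID, Seen> get(long containerId) {
    return histMap.getOrDefault(containerId, new HashMap<>());
  }
}
```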
diff --git a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/DummyReconDBTask.java b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/DummyReconDBTask.java
index 0de9494..0ad53cf 100644
--- a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/DummyReconDBTask.java
+++ b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/DummyReconDBTask.java
@@ -49,7 +49,6 @@
     return taskName;
   }
 
-  @Override
   public Collection<String> getTaskTables() {
     return Collections.singletonList("volumeTable");
   }
diff --git a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestFileSizeCountTask.java b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestFileSizeCountTask.java
index 95aa52b..297a477 100644
--- a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestFileSizeCountTask.java
+++ b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestFileSizeCountTask.java
@@ -153,6 +153,7 @@
         .setAction(PUT)
         .setKey("deletedKey")
         .setValue(toBeDeletedKey)
+        .setTable(OmMetadataManagerImpl.KEY_TABLE)
         .build();
 
     OmKeyInfo toBeUpdatedKey = mock(OmKeyInfo.class);
@@ -164,6 +165,7 @@
         .setAction(PUT)
         .setKey("updatedKey")
         .setValue(toBeUpdatedKey)
+        .setTable(OmMetadataManagerImpl.KEY_TABLE)
         .build();
 
     OMUpdateEventBatch omUpdateEventBatch =
@@ -196,6 +198,7 @@
         .setAction(PUT)
         .setKey("newKey")
         .setValue(newKey)
+        .setTable(OmMetadataManagerImpl.KEY_TABLE)
         .build();
 
     // Update existing key.
@@ -209,6 +212,7 @@
         .setKey("updatedKey")
         .setValue(updatedKey)
         .setOldValue(toBeUpdatedKey)
+        .setTable(OmMetadataManagerImpl.KEY_TABLE)
         .build();
 
     // Delete another existing key.
@@ -216,6 +220,7 @@
         .setAction(DELETE)
         .setKey("deletedKey")
         .setValue(toBeDeletedKey)
+        .setTable(OmMetadataManagerImpl.KEY_TABLE)
         .build();
 
     omUpdateEventBatch = new OMUpdateEventBatch(
@@ -322,6 +327,7 @@
               .setAction(PUT)
               .setKey("key" + keyIndex)
               .setValue(omKeyInfo)
+              .setTable(OmMetadataManagerImpl.KEY_TABLE)
               .build());
         }
       }
@@ -365,6 +371,7 @@
                 .setAction(DELETE)
                 .setKey("key" + keyIndex)
                 .setValue(omKeyInfo)
+                .setTable(OmMetadataManagerImpl.KEY_TABLE)
                 .build());
           } else {
             // update all the files with keyIndex > 5 to filesize 1023L
@@ -374,6 +381,7 @@
                 .setAction(UPDATE)
                 .setKey("key" + keyIndex)
                 .setValue(omKeyInfo)
+                .setTable(OmMetadataManagerImpl.KEY_TABLE)
                 .setOldValue(
                     omKeyInfoList.get((volIndex * bktIndex) + keyIndex))
                 .build());
diff --git a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestReconTaskControllerImpl.java b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestReconTaskControllerImpl.java
index 7d1323b..a89a4ae 100644
--- a/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestReconTaskControllerImpl.java
+++ b/hadoop-ozone/recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestReconTaskControllerImpl.java
@@ -27,7 +27,6 @@
 import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
 
-import java.util.Collections;
 import java.util.HashSet;
 
 import org.apache.commons.lang3.tuple.ImmutablePair;
@@ -78,8 +77,6 @@
     OMUpdateEventBatch omUpdateEventBatchMock = mock(OMUpdateEventBatch.class);
     when(omUpdateEventBatchMock.getLastSequenceNumber()).thenReturn(100L);
     when(omUpdateEventBatchMock.isEmpty()).thenReturn(false);
-    when(omUpdateEventBatchMock.filter(Collections.singleton("MockTable")))
-        .thenReturn(omUpdateEventBatchMock);
 
     long startTime = System.currentTimeMillis();
     reconTaskController.consumeOMEvents(
@@ -205,11 +202,7 @@
    */
   private ReconOmTask getMockTask(String taskName) {
     ReconOmTask reconOmTaskMock = mock(ReconOmTask.class);
-    when(reconOmTaskMock.getTaskTables()).thenReturn(Collections
-        .EMPTY_LIST);
     when(reconOmTaskMock.getTaskName()).thenReturn(taskName);
-    when(reconOmTaskMock.getTaskTables())
-        .thenReturn(Collections.singleton("MockTable"));
     return reconOmTaskMock;
   }
 }
diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSSignatureProcessor.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSSignatureProcessor.java
index 4d45101..26c1a3e 100644
--- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSSignatureProcessor.java
+++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/AWSSignatureProcessor.java
@@ -28,6 +28,7 @@
 import java.net.URISyntaxException;
 import java.net.URLEncoder;
 import java.net.UnknownHostException;
+import java.nio.charset.StandardCharsets;
 import java.security.MessageDigest;
 import java.security.NoSuchAlgorithmException;
 import java.time.LocalDate;
@@ -54,6 +55,7 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+
 /**
  * Parser to process AWS V2 & V4 auth request. Creates string to sign and auth
  * header. For more details refer to AWS documentation https://docs.aws
@@ -309,7 +311,7 @@
   private String urlEncode(String str) {
     try {
 
-      return URLEncoder.encode(str, UTF_8.name())
+      return URLEncoder.encode(str, StandardCharsets.UTF_8.name())
           .replaceAll("\\+", "%20")
           .replaceAll("%7E", "~");
     } catch (UnsupportedEncodingException e) {
@@ -340,7 +342,7 @@
 
   public static String hash(String payload) throws NoSuchAlgorithmException {
     MessageDigest md = MessageDigest.getInstance("SHA-256");
-    md.update(payload.getBytes(UTF_8));
+    md.update(payload.getBytes(StandardCharsets.UTF_8));
     return Hex.encode(md.digest()).toLowerCase();
   }
 
diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java
index 364d263..bee6e65 100644
--- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java
+++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java
@@ -25,7 +25,6 @@
 import java.net.URISyntaxException;
 import java.security.PrivilegedExceptionAction;
 
-import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.ozone.OzoneSecurityUtil;
@@ -36,8 +35,9 @@
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.Token;
 
+import com.google.common.annotations.VisibleForTesting;
+import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMTokenProto.Type.S3AUTHINFO;
-import static org.apache.hadoop.ozone.s3.SignatureProcessor.UTF_8;
 import static org.apache.hadoop.ozone.s3.exception.S3ErrorTable.INTERNAL_ERROR;
 import static org.apache.hadoop.ozone.s3.exception.S3ErrorTable.MALFORMED_HEADER;
 import static org.apache.hadoop.ozone.s3.exception.S3ErrorTable.S3_AUTHINFO_CREATION_ERROR;
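
The charset changes above are behavior-preserving: the signature payload was already treated as UTF-8, only the constant now comes from java.nio.charset.StandardCharsets (the SignatureProcessor.UTF_8 field is dropped in the next hunk). A standalone sketch of the SHA-256 hex digest step, using a plain hex loop in place of the gateway's Hex helper:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Sketch of the payload hashing step; a plain hex loop replaces Hex.encode. */
public final class Sha256HexSketch {

  private Sha256HexSketch() {
  }

  public static String hash(String payload) throws NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("SHA-256");
    md.update(payload.getBytes(StandardCharsets.UTF_8));
    StringBuilder sb = new StringBuilder();
    for (byte b : md.digest()) {
      sb.append(String.format("%02x", b));   // lower-case hex, as SigV4 expects
    }
    return sb.toString();
  }

  public static void main(String[] args) throws NoSuchAlgorithmException {
    // SHA-256 of the empty string, the well-known value used for empty payloads.
    System.out.println(hash(""));
    // e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
  }
}
```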
diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/SignatureProcessor.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/SignatureProcessor.java
index e3cb6af..5e2e3fb 100644
--- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/SignatureProcessor.java
+++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/SignatureProcessor.java
@@ -17,7 +17,6 @@
  */
 package org.apache.hadoop.ozone.s3;
 
-import java.nio.charset.Charset;
 import java.time.ZoneOffset;
 import java.time.format.DateTimeFormatter;
 
@@ -32,7 +31,6 @@
   String X_AMAZ_DATE = "X-Amz-Date";
   String CONTENT_MD5 = "content-md5";
   String AUTHORIZATION_HEADER = "Authorization";
-  Charset UTF_8 = Charset.forName("utf-8");
   String X_AMZ_CONTENT_SHA256 = "X-Amz-Content-SHA256";
   String HOST = "host";
 
diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
index 789bb45..b8bed64 100644
--- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
+++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
@@ -116,19 +116,26 @@
     if (startAfter == null && marker != null) {
       startAfter = marker;
     }
-    if (startAfter != null && continueToken != null) {
-      // If continuation token and start after both are provided, then we
-      // ignore start After
-      ozoneKeyIterator = bucket.listKeys(prefix, decodedToken.getLastKey());
-    } else if (startAfter != null && continueToken == null) {
-      ozoneKeyIterator = bucket.listKeys(prefix, startAfter);
-    } else if (startAfter == null && continueToken != null){
-      ozoneKeyIterator = bucket.listKeys(prefix, decodedToken.getLastKey());
-    } else {
-      ozoneKeyIterator = bucket.listKeys(prefix);
+    try {
+      if (startAfter != null && continueToken != null) {
+        // If continuation token and start after both are provided, then we
+        // ignore start After
+        ozoneKeyIterator = bucket.listKeys(prefix, decodedToken.getLastKey());
+      } else if (startAfter != null && continueToken == null) {
+        ozoneKeyIterator = bucket.listKeys(prefix, startAfter);
+      } else if (startAfter == null && continueToken != null) {
+        ozoneKeyIterator = bucket.listKeys(prefix, decodedToken.getLastKey());
+      } else {
+        ozoneKeyIterator = bucket.listKeys(prefix);
+      }
+    } catch (OMException ex) {
+      if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED, bucketName);
+      } else {
+        throw ex;
+      }
     }
 
-
     ListObjectResponse response = new ListObjectResponse();
     response.setDelimiter(delimiter);
     response.setName(bucketName);
@@ -229,8 +236,16 @@
 
     OzoneBucket bucket = getBucket(bucketName);
 
-    OzoneMultipartUploadList ozoneMultipartUploadList =
-        bucket.listMultipartUploads(prefix);
+    OzoneMultipartUploadList ozoneMultipartUploadList;
+    try {
+      ozoneMultipartUploadList = bucket.listMultipartUploads(prefix);
+    } catch (OMException exception) {
+      if (exception.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED,
+            prefix);
+      }
+      throw exception;
+    }
 
     ListMultipartUploadsResult result = new ListMultipartUploadsResult();
     result.setBucket(bucketName);
@@ -282,6 +297,8 @@
       } else if (ex.getResult() == ResultCodes.BUCKET_NOT_FOUND) {
         throw S3ErrorTable.newError(S3ErrorTable
             .NO_SUCH_BUCKET, bucketName);
+      } else if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED, bucketName);
       } else {
         throw ex;
       }
@@ -315,7 +332,11 @@
             result.addDeleted(new DeletedObject(keyToDelete.getKey()));
           }
         } catch (OMException ex) {
-          if (ex.getResult() != ResultCodes.KEY_NOT_FOUND) {
+          if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+            result.addError(
+                new Error(keyToDelete.getKey(), "PermissionDenied",
+                    ex.getMessage()));
+          } else if (ex.getResult() != ResultCodes.KEY_NOT_FOUND) {
             result.addError(
                 new Error(keyToDelete.getKey(), "InternalError",
                     ex.getMessage()));
diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/EndpointBase.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/EndpointBase.java
index b60519d..360f4f4 100644
--- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/EndpointBase.java
+++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/EndpointBase.java
@@ -70,6 +70,8 @@
       if (ex.getResult() == ResultCodes.BUCKET_NOT_FOUND
           || ex.getResult() == ResultCodes.VOLUME_NOT_FOUND) {
         throw S3ErrorTable.newError(S3ErrorTable.NO_SUCH_BUCKET, bucketName);
+      } else if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED, bucketName);
       } else {
         throw ex;
       }
@@ -91,11 +93,13 @@
    * @throws IOException
    */
   protected String createS3Bucket(String bucketName) throws
-      IOException {
+      IOException, OS3Exception {
     try {
       client.getObjectStore().createS3Bucket(bucketName);
     } catch (OMException ex) {
-      if (ex.getResult() != ResultCodes.BUCKET_ALREADY_EXISTS) {
+      if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED, bucketName);
+      } else if (ex.getResult() != ResultCodes.BUCKET_ALREADY_EXISTS) {
         // S3 does not return error for bucket already exists, it just
         // returns the location.
         throw ex;
@@ -110,8 +114,16 @@
    * @throws  IOException in case the bucket cannot be deleted.
    */
   public void deleteS3Bucket(String s3BucketName)
-      throws IOException {
-    client.getObjectStore().deleteS3Bucket(s3BucketName);
+      throws IOException, OS3Exception {
+    try {
+      client.getObjectStore().deleteS3Bucket(s3BucketName);
+    } catch (OMException ex) {
+      if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED,
+            s3BucketName);
+      }
+      throw ex;
+    }
   }
 
   /**
@@ -123,7 +135,7 @@
    * @return {@code Iterator<OzoneBucket>}
    */
   public Iterator<? extends OzoneBucket> listS3Buckets(String prefix)
-      throws IOException {
+      throws IOException, OS3Exception {
     return iterateBuckets(volume -> volume.listBuckets(prefix));
   }
 
@@ -138,18 +150,21 @@
    * @return {@code Iterator<OzoneBucket>}
    */
   public Iterator<? extends OzoneBucket> listS3Buckets(String prefix,
-      String previousBucket) throws IOException {
+      String previousBucket) throws IOException, OS3Exception {
     return iterateBuckets(volume -> volume.listBuckets(prefix, previousBucket));
   }
 
   private Iterator<? extends OzoneBucket> iterateBuckets(
       Function<OzoneVolume, Iterator<? extends OzoneBucket>> query)
-      throws IOException {
+      throws IOException, OS3Exception {
     try {
       return query.apply(getVolume());
     } catch (OMException e) {
       if (e.getResult() == ResultCodes.VOLUME_NOT_FOUND) {
         return Collections.emptyIterator();
+      } else if (e.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED,
+            "listBuckets");
       } else {
         throw e;
       }
diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
index 527f774..6b4efb7 100644
--- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
+++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
@@ -209,6 +209,9 @@
               " considered as Unix Paths. Path has Violated FS Semantics " +
               "which caused put operation to fail.");
           throw os3Exception;
+        } else if ((((OMException) ex).getResult() ==
+            ResultCodes.PERMISSION_DENIED)) {
+          throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED, keyPath);
         }
       }
       throw ex;
@@ -320,6 +323,8 @@
       if (ex.getResult() == ResultCodes.KEY_NOT_FOUND) {
         throw S3ErrorTable.newError(S3ErrorTable
             .NO_SUCH_KEY, keyPath);
+      } else if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED, keyPath);
       } else {
         throw ex;
       }
@@ -357,6 +362,8 @@
       if (ex.getResult() == ResultCodes.KEY_NOT_FOUND) {
         // Just return 404 with no content
         return Response.status(Status.NOT_FOUND).build();
+      } else if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED, keyPath);
       } else {
         throw ex;
       }
@@ -426,6 +433,8 @@
       } else if (ex.getResult() == ResultCodes.KEY_NOT_FOUND) {
         //NOT_FOUND is not a problem, AWS doesn't throw exception for missing
         // keys. Just return 204
+      } else if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED, keyPath);
       } else {
         throw ex;
       }
@@ -474,9 +483,12 @@
 
       return Response.status(Status.OK).entity(
           multipartUploadInitiateResponse).build();
-    } catch (IOException ex) {
+    } catch (OMException ex) {
       LOG.error("Error in Initiate Multipart Upload Request for bucket: {}, " +
           "key: {}", bucket, key, ex);
+      if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED, key);
+      }
       throw ex;
     }
   }
@@ -619,6 +631,9 @@
       if (ex.getResult() == ResultCodes.NO_SUCH_MULTIPART_UPLOAD_ERROR) {
         throw S3ErrorTable.newError(NO_SUCH_UPLOAD,
             uploadID);
+      } else if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED,
+            bucket + "/" + key);
       }
       throw ex;
     }
@@ -675,6 +690,9 @@
       if (ex.getResult() == ResultCodes.NO_SUCH_MULTIPART_UPLOAD_ERROR) {
         throw S3ErrorTable.newError(NO_SUCH_UPLOAD,
             uploadID);
+      } else if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED,
+            bucket + "/" + key + "/" + uploadID);
       }
       throw ex;
     }
@@ -760,6 +778,9 @@
         throw S3ErrorTable.newError(S3ErrorTable.NO_SUCH_KEY, sourceKey);
       } else if (ex.getResult() == ResultCodes.BUCKET_NOT_FOUND) {
         throw S3ErrorTable.newError(S3ErrorTable.NO_SUCH_BUCKET, sourceBucket);
+      } else if (ex.getResult() == ResultCodes.PERMISSION_DENIED) {
+        throw S3ErrorTable.newError(S3ErrorTable.ACCESS_DENIED,
+            destBucket + "/" + destkey);
       }
       throw ex;
     } finally {
diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/S3ErrorTable.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/S3ErrorTable.java
index 432b582..a2c9f17 100644
--- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/S3ErrorTable.java
+++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/exception/S3ErrorTable.java
@@ -22,6 +22,7 @@
 
 import static java.net.HttpURLConnection.HTTP_BAD_REQUEST;
 import static java.net.HttpURLConnection.HTTP_CONFLICT;
+import static java.net.HttpURLConnection.HTTP_FORBIDDEN;
 import static java.net.HttpURLConnection.HTTP_NOT_FOUND;
 import static java.net.HttpURLConnection.HTTP_SERVER_ERROR;
 import static org.apache.hadoop.ozone.s3.util.S3Consts.RANGE_NOT_SATISFIABLE;
@@ -105,6 +106,9 @@
       "InternalError", "We encountered an internal error. Please try again.",
       HTTP_SERVER_ERROR);
 
+  public static final OS3Exception ACCESS_DENIED = new OS3Exception(
+      "AccessDenied", "User doesn't have the right to access this " +
+      "resource.", HTTP_FORBIDDEN);
 
   /**
    * Create a new instance of Error.
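
With the new ACCESS_DENIED constant in place, the endpoint changes above all follow one translation idiom: catch the OMException, map PERMISSION_DENIED to the S3 AccessDenied error (HTTP 403) with the affected resource as context, and rethrow everything else unchanged. A self-contained sketch of that shape, with simplified stand-ins for the OM and S3 exception types:

```java
/** Simplified stand-ins for OMException / OS3Exception; not the real classes. */
public final class S3ErrorTranslationSketch {

  enum ResultCode { PERMISSION_DENIED, KEY_NOT_FOUND, OTHER }

  static final class OmException extends Exception {
    private final ResultCode result;

    OmException(ResultCode result, String message) {
      super(message);
      this.result = result;
    }

    ResultCode getResult() {
      return result;
    }
  }

  static final class S3Exception extends Exception {
    private final int httpCode;

    S3Exception(String code, String resource, int httpCode) {
      super(code + ": " + resource);
      this.httpCode = httpCode;
    }
  }

  private S3ErrorTranslationSketch() {
  }

  /** Map PERMISSION_DENIED to a 403 AccessDenied; rethrow everything else. */
  static String readKey(String bucket, String key)
      throws OmException, S3Exception {
    try {
      return doOmRead(bucket, key);
    } catch (OmException ex) {
      if (ex.getResult() == ResultCode.PERMISSION_DENIED) {
        throw new S3Exception("AccessDenied", bucket + "/" + key, 403);
      }
      throw ex;  // other OM failures keep their original semantics
    }
  }

  // Placeholder for the actual OM call.
  private static String doOmRead(String bucket, String key) throws OmException {
    throw new OmException(ResultCode.PERMISSION_DENIED, "acl check failed");
  }
}
```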
diff --git a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/ObjectStoreStub.java b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/ObjectStoreStub.java
index 1dfc962..74ebf4e 100644
--- a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/ObjectStoreStub.java
+++ b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/ObjectStoreStub.java
@@ -66,7 +66,7 @@
             volumeArgs.getAdmin(),
             volumeArgs.getOwner(),
             volumeArgs.getQuotaInBytes(),
-            volumeArgs.getQuotaInCounts(),
+            volumeArgs.getQuotaInNamespace(),
             Time.now(),
             volumeArgs.getAcls());
     volumes.put(volumeName, volume);
diff --git a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneVolumeStub.java b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneVolumeStub.java
index 53a5d81..f5e0603 100644
--- a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneVolumeStub.java
+++ b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/client/OzoneVolumeStub.java
@@ -39,9 +39,10 @@
   private Map<String, OzoneBucketStub> buckets = new HashMap<>();
 
   public OzoneVolumeStub(String name, String admin, String owner,
-      long quotaInBytes, long quotaInCounts, long creationTime,
+      long quotaInBytes, long quotaInNamespace, long creationTime,
       List<OzoneAcl> acls) {
-    super(name, admin, owner, quotaInBytes, quotaInCounts, creationTime, acls);
+    super(name, admin, owner, quotaInBytes, quotaInNamespace, creationTime,
+        acls);
   }
 
   @Override
diff --git a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/TestSignedChunksInputStream.java b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/TestSignedChunksInputStream.java
index 3599c05..8dcfe59 100644
--- a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/TestSignedChunksInputStream.java
+++ b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/TestSignedChunksInputStream.java
@@ -20,7 +20,7 @@
 import java.io.ByteArrayInputStream;
 import java.io.IOException;
 import java.io.InputStream;
-import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 
 import org.apache.commons.io.IOUtils;
 import org.junit.Assert;
@@ -36,14 +36,14 @@
     InputStream is = fileContent("0;chunk-signature"
         +
         "=23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40");
-    String result = IOUtils.toString(is, Charset.forName("UTF-8"));
+    String result = IOUtils.toString(is, StandardCharsets.UTF_8);
     Assert.assertEquals("", result);
 
     is = fileContent("0;chunk-signature"
         +
         "=23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40\r"
         + "\n");
-    result = IOUtils.toString(is, Charset.forName("UTF-8"));
+    result = IOUtils.toString(is, StandardCharsets.UTF_8);
     Assert.assertEquals("", result);
   }
 
@@ -54,7 +54,7 @@
         +
         "=23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40\r"
         + "\n1234567890\r\n");
-    String result = IOUtils.toString(is, Charset.forName("UTF-8"));
+    String result = IOUtils.toString(is, StandardCharsets.UTF_8);
     Assert.assertEquals("1234567890", result);
 
     //test read(byte[],int,int)
@@ -74,7 +74,7 @@
         +
         "=23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40\r"
         + "\n1234567890");
-    String result = IOUtils.toString(is, Charset.forName("UTF-8"));
+    String result = IOUtils.toString(is, StandardCharsets.UTF_8);
     Assert.assertEquals("1234567890", result);
 
     //test read(byte[],int,int)
@@ -94,7 +94,7 @@
         + "1234567890\r\n"
         + "05;chunk-signature=signature\r\n"
         + "abcde\r\n");
-    String result = IOUtils.toString(is, Charset.forName("UTF-8"));
+    String result = IOUtils.toString(is, StandardCharsets.UTF_8);
     Assert.assertEquals("1234567890abcde", result);
 
     //test read(byte[],int,int)
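
The Charset.forName("UTF-8") to StandardCharsets.UTF_8 changes here (and in the genesis benchmarks further down) replace a by-name charset lookup with a compile-time constant, which also removes the chance of a misspelled charset name. A trivial standalone illustration:

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    import org.apache.commons.io.IOUtils;

    final class CharsetSketch {
      public static void main(String[] args) throws IOException {
        byte[] bytes = "1234567890".getBytes(StandardCharsets.UTF_8);
        // equivalent to IOUtils.toString(is, Charset.forName("UTF-8")),
        // but without the runtime charset lookup
        String result = IOUtils.toString(
            new ByteArrayInputStream(bytes), StandardCharsets.UTF_8);
        System.out.println(result);
      }
    }
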
diff --git a/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestPermissionCheck.java b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestPermissionCheck.java
new file mode 100644
index 0000000..1c5622e
--- /dev/null
+++ b/hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/endpoint/TestPermissionCheck.java
@@ -0,0 +1,268 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ *
+ */
+package org.apache.hadoop.ozone.s3.endpoint;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.s3.exception.OS3Exception;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import javax.ws.rs.core.HttpHeaders;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static java.net.HttpURLConnection.HTTP_FORBIDDEN;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.anyLong;
+import static org.mockito.ArgumentMatchers.anyString;
+import static org.mockito.Mockito.doThrow;
+
+/**
+ * Tests the results of operation permission checks.
+ */
+public class TestPermissionCheck {
+  private OzoneConfiguration conf;
+  private OzoneClient client;
+  private ObjectStore objectStore;
+  private OzoneBucket bucket;
+  private OzoneVolume volume;
+  private OMException exception;
+  private HttpHeaders headers;
+
+  @Before
+  public void setup() {
+    conf = new OzoneConfiguration();
+    conf.set(OzoneConfigKeys.OZONE_S3_VOLUME_NAME,
+        OzoneConfigKeys.OZONE_S3_VOLUME_NAME_DEFAULT);
+    client = Mockito.mock(OzoneClient.class);
+    objectStore = Mockito.mock(ObjectStore.class);
+    bucket = Mockito.mock(OzoneBucket.class);
+    volume = Mockito.mock(OzoneVolume.class);
+    exception = new OMException("Permission Denied",
+        OMException.ResultCodes.PERMISSION_DENIED);
+    Mockito.when(client.getObjectStore()).thenReturn(objectStore);
+    Mockito.when(client.getConfiguration()).thenReturn(conf);
+    headers = Mockito.mock(HttpHeaders.class);
+  }
+
+  /**
+   *  Root Endpoint.
+   */
+  @Test
+  public void testListS3Buckets() throws IOException {
+    doThrow(exception).when(objectStore).getVolume(anyString());
+    RootEndpoint rootEndpoint = new RootEndpoint();
+    rootEndpoint.setClient(client);
+
+    try {
+      rootEndpoint.get();
+      Assert.fail("Should fail");
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof OS3Exception);
+      Assert.assertTrue(((OS3Exception) e).getHttpCode() == HTTP_FORBIDDEN);
+    }
+  }
+
+  /**
+   *  Bucket Endpoint.
+   */
+  @Test
+  public void testGetBucket() throws IOException {
+    doThrow(exception).when(objectStore).getS3Bucket(anyString());
+    BucketEndpoint bucketEndpoint = new BucketEndpoint();
+    bucketEndpoint.setClient(client);
+
+    try {
+      bucketEndpoint.head("bucketName");
+      Assert.fail("Should fail");
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof OS3Exception);
+      Assert.assertTrue(((OS3Exception) e).getHttpCode() == HTTP_FORBIDDEN);
+    }
+  }
+
+  @Test
+  public void testCreateBucket() throws IOException {
+    Mockito.when(objectStore.getVolume(anyString())).thenReturn(volume);
+    doThrow(exception).when(objectStore).createS3Bucket(anyString());
+    BucketEndpoint bucketEndpoint = new BucketEndpoint();
+    bucketEndpoint.setClient(client);
+
+    try {
+      bucketEndpoint.put("bucketName", null);
+      Assert.fail("Should fail");
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof OS3Exception);
+      Assert.assertTrue(((OS3Exception) e).getHttpCode() == HTTP_FORBIDDEN);
+    }
+  }
+
+  @Test
+  public void testDeleteBucket() throws IOException {
+    doThrow(exception).when(objectStore).deleteS3Bucket(anyString());
+    BucketEndpoint bucketEndpoint = new BucketEndpoint();
+    bucketEndpoint.setClient(client);
+
+    try {
+      bucketEndpoint.delete("bucketName");
+      Assert.fail("Should fail");
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof OS3Exception);
+      Assert.assertTrue(((OS3Exception) e).getHttpCode() == HTTP_FORBIDDEN);
+    }
+  }
+  @Test
+  public void testListMultiUpload() throws IOException {
+    Mockito.when(objectStore.getS3Bucket(anyString())).thenReturn(bucket);
+    doThrow(exception).when(bucket).listMultipartUploads(anyString());
+    BucketEndpoint bucketEndpoint = new BucketEndpoint();
+    bucketEndpoint.setClient(client);
+
+    try {
+      bucketEndpoint.listMultipartUploads("bucketName", "prefix");
+      Assert.fail("Should fail");
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof OS3Exception);
+      Assert.assertTrue(((OS3Exception) e).getHttpCode() == HTTP_FORBIDDEN);
+    }
+  }
+
+  @Test
+  public void testListKey() throws IOException {
+    Mockito.when(objectStore.getVolume(anyString())).thenReturn(volume);
+    Mockito.when(objectStore.getS3Bucket(anyString())).thenReturn(bucket);
+    doThrow(exception).when(bucket).listKeys(anyString());
+    BucketEndpoint bucketEndpoint = new BucketEndpoint();
+    bucketEndpoint.setClient(client);
+
+    try {
+      bucketEndpoint.list("bucketName", null, null, null, 1000,
+          null, null, null, null, null, null);
+      Assert.fail("Should fail");
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof OS3Exception);
+      Assert.assertTrue(((OS3Exception) e).getHttpCode() == HTTP_FORBIDDEN);
+    }
+  }
+
+  @Test
+  public void testDeleteKeys() throws IOException, OS3Exception {
+    Mockito.when(objectStore.getVolume(anyString())).thenReturn(volume);
+    Mockito.when(objectStore.getS3Bucket(anyString())).thenReturn(bucket);
+    doThrow(exception).when(bucket).deleteKey(any());
+    BucketEndpoint bucketEndpoint = new BucketEndpoint();
+    bucketEndpoint.setClient(client);
+    MultiDeleteRequest request = new MultiDeleteRequest();
+    List<MultiDeleteRequest.DeleteObject> objectList = new ArrayList<>();
+    objectList.add(new MultiDeleteRequest.DeleteObject("deleteKeyName"));
+    request.setQuiet(false);
+    request.setObjects(objectList);
+
+    MultiDeleteResponse response =
+        bucketEndpoint.multiDelete("BucketName", "keyName", request);
+    Assert.assertTrue(response.getErrors().size() == 1);
+    Assert.assertTrue(
+        response.getErrors().get(0).getCode().equals("PermissionDenied"));
+  }
+
+  /**
+   *  Object Endpoint.
+   */
+  @Test
+  public void testGetKey() throws IOException {
+    Mockito.when(objectStore.getS3Bucket(anyString())).thenReturn(bucket);
+    doThrow(exception).when(bucket).getKey(anyString());
+    ObjectEndpoint objectEndpoint = new ObjectEndpoint();
+    objectEndpoint.setClient(client);
+    objectEndpoint.setHeaders(headers);
+
+    try {
+      objectEndpoint.get("bucketName", "keyPath", null, 1000, "marker",
+          null);
+      Assert.fail("Should fail");
+    } catch (Exception e) {
+      e.printStackTrace();
+      Assert.assertTrue(e instanceof OS3Exception);
+      Assert.assertTrue(((OS3Exception) e).getHttpCode() == HTTP_FORBIDDEN);
+    }
+  }
+
+  @Test
+  public void testPutKey() throws IOException {
+    Mockito.when(objectStore.getS3Bucket(anyString())).thenReturn(bucket);
+    doThrow(exception).when(bucket)
+        .createKey(anyString(), anyLong(), any(), any(), any());
+    ObjectEndpoint objectEndpoint = new ObjectEndpoint();
+    objectEndpoint.setClient(client);
+    objectEndpoint.setHeaders(headers);
+
+    try {
+      objectEndpoint.put("bucketName", "keyPath", 1024, 0, null, null);
+      Assert.fail("Should fail");
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof OS3Exception);
+      Assert.assertTrue(((OS3Exception) e).getHttpCode() == HTTP_FORBIDDEN);
+    }
+  }
+
+  @Test
+  public void testDeleteKey() throws IOException {
+    Mockito.when(objectStore.getS3Bucket(anyString())).thenReturn(bucket);
+    doThrow(exception).when(bucket).deleteKey(anyString());
+    ObjectEndpoint objectEndpoint = new ObjectEndpoint();
+    objectEndpoint.setClient(client);
+    objectEndpoint.setHeaders(headers);
+
+    try {
+      objectEndpoint.delete("bucketName", "keyPath", null);
+      Assert.fail("Should fail");
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof OS3Exception);
+      Assert.assertTrue(((OS3Exception) e).getHttpCode() == HTTP_FORBIDDEN);
+    }
+  }
+
+  @Test
+  public void testMultiUploadKey() throws IOException {
+    Mockito.when(objectStore.getS3Bucket(anyString())).thenReturn(bucket);
+    doThrow(exception).when(bucket)
+        .initiateMultipartUpload(anyString(), any(), any());
+    ObjectEndpoint objectEndpoint = new ObjectEndpoint();
+    objectEndpoint.setClient(client);
+    objectEndpoint.setHeaders(headers);
+
+    try {
+      objectEndpoint.initializeMultipartUpload("bucketName", "keyPath");
+      Assert.fail("Should fail");
+    } catch (Exception e) {
+      Assert.assertTrue(e instanceof OS3Exception);
+      Assert.assertTrue(((OS3Exception) e).getHttpCode() == HTTP_FORBIDDEN);
+    }
+  }
+}
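
The new test repeats the same try / fail / catch-and-assert block for every endpoint call. A possible follow-up (not part of this patch) is a small helper that keeps the 403 assertion in one place; a sketch under that assumption, sticking to the JUnit 4 API already used in the file:

    import static java.net.HttpURLConnection.HTTP_FORBIDDEN;

    import org.apache.hadoop.ozone.s3.exception.OS3Exception;
    import org.junit.Assert;

    // Hypothetical helper for TestPermissionCheck; all names are illustrative.
    final class PermissionCheckAsserts {

      /** A Runnable-like callback that may throw any exception. */
      interface EndpointCall {
        void run() throws Exception;
      }

      static void assertForbidden(EndpointCall call) {
        try {
          call.run();
          Assert.fail("Expected an OS3Exception with HTTP 403");
        } catch (Exception e) {
          Assert.assertTrue(e instanceof OS3Exception);
          Assert.assertEquals(HTTP_FORBIDDEN, ((OS3Exception) e).getHttpCode());
        }
      }
    }

Each test body then collapses to a single call such as assertForbidden(() -> bucketEndpoint.head("bucketName")).
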
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/DBScanner.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/DBScanner.java
index 1ceab42..d140f80 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/DBScanner.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/DBScanner.java
@@ -142,7 +142,9 @@
     this.columnFamilyMap = new HashMap<>();
     DBColumnFamilyDefinition[] columnFamilyDefinitions = dbDefinition
             .getColumnFamilies();
-    for(DBColumnFamilyDefinition definition:columnFamilyDefinitions){
+    for (DBColumnFamilyDefinition definition:columnFamilyDefinitions) {
+      System.out.println("Added definition for table: " +
+          definition.getTableName());
       this.columnFamilyMap.put(definition.getTableName(), definition);
     }
   }
@@ -173,7 +175,7 @@
             getDefinition(new File(dbPath).getName()));
     if (this.columnFamilyMap !=null) {
       if (!this.columnFamilyMap.containsKey(tableName)) {
-        System.out.print("Table with specified name does not exist");
+        System.out.print("Table with name: " + tableName + " does not exist");
       } else {
         DBColumnFamilyDefinition columnFamilyDefinition =
                 this.columnFamilyMap.get(tableName);
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
index 1cfff12..65096a6 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
@@ -162,11 +162,20 @@
 
       //in case of an other failed test, we shouldn't execute more tasks.
       if (counter >= testNo || (!failAtEnd && failureCounter.get() > 0)) {
-        return;
+        break;
       }
 
       tryNextTask(provider, counter);
     }
+
+    taskLoopCompleted();
+  }
+
+  /**
+   * Provides a way to clean up per-thread resources.
+   */
+  protected void taskLoopCompleted() {
+    // no-op
   }
 
   /**
@@ -482,4 +491,12 @@
       return OzoneClientFactory.getRpcClient(conf);
     }
   }
+
+  public void setTestNo(long testNo) {
+    this.testNo = testNo;
+  }
+
+  public void setThreadNo(int threadNo) {
+    this.threadNo = threadNo;
+  }
 }
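
The switch from return to break, together with the new taskLoopCompleted() hook, gives each worker thread a well-defined cleanup point after its last task; HadoopFsGenerator below uses it to close a thread-local FileSystem. A minimal sketch of a subclass using the hook (the generator and its resource are hypothetical):

    package org.apache.hadoop.ozone.freon;

    import java.util.concurrent.Callable;

    // Hypothetical subclass; only the hook usage mirrors this patch.
    public class PerThreadResourceGenerator extends BaseFreonGenerator
        implements Callable<Void> {

      // one resource instance per worker thread
      private final ThreadLocal<StringBuilder> perThreadBuffer =
          ThreadLocal.withInitial(StringBuilder::new);

      @Override
      public Void call() throws Exception {
        init();
        runTests(this::doOne);
        return null;
      }

      private void doOne(long counter) {
        perThreadBuffer.get().append(counter);
      }

      @Override
      protected void taskLoopCompleted() {
        // invoked once by each worker thread after its final task
        perThreadBuffer.remove();
      }
    }
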
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/ClosedContainerReplicator.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/ClosedContainerReplicator.java
new file mode 100644
index 0000000..ad2810a
--- /dev/null
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/ClosedContainerReplicator.java
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.Callable;
+import java.util.stream.Collectors;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.ConfigurationSource;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerType;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
+import org.apache.hadoop.hdds.scm.cli.ContainerOperationClient;
+import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
+import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
+import org.apache.hadoop.ozone.container.common.helpers.ContainerMetrics;
+import org.apache.hadoop.ozone.container.common.impl.ContainerSet;
+import org.apache.hadoop.ozone.container.common.interfaces.Handler;
+import org.apache.hadoop.ozone.container.common.volume.MutableVolumeSet;
+import org.apache.hadoop.ozone.container.keyvalue.TarContainerPacker;
+import org.apache.hadoop.ozone.container.ozoneimpl.ContainerController;
+import org.apache.hadoop.ozone.container.replication.ContainerReplicator;
+import org.apache.hadoop.ozone.container.replication.DownloadAndImportReplicator;
+import org.apache.hadoop.ozone.container.replication.ReplicationSupervisor;
+import org.apache.hadoop.ozone.container.replication.ReplicationTask;
+import org.apache.hadoop.ozone.container.replication.SimpleContainerDownloader;
+
+import com.codahale.metrics.Timer;
+import org.jetbrains.annotations.NotNull;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+/**
+ * Utility to replicate closed containers with the datanode code.
+ */
+@Command(name = "cr",
+    aliases = "container-replicator",
+    description = "Replicate / download closed containers.",
+    versionProvider = HddsVersionProvider.class,
+    mixinStandardHelpOptions = true,
+    showDefaultValues = true)
+public class ClosedContainerReplicator extends BaseFreonGenerator implements
+    Callable<Void> {
+
+  @Option(names = {"--datanode"},
+      description = "Replicate only containers on this specific datanode.",
+      defaultValue = "")
+  private String datanode;
+
+  private ReplicationSupervisor supervisor;
+
+  private Timer timer;
+
+  private List<ReplicationTask> replicationTasks;
+
+  @Override
+  public Void call() throws Exception {
+
+    OzoneConfiguration conf = createOzoneConfiguration();
+
+    final Collection<String> datanodeStorageDirs =
+        MutableVolumeSet.getDatanodeStorageDirs(conf);
+
+    for (String dir : datanodeStorageDirs) {
+      checkDestinationDirectory(dir);
+    }
+
+    //logic same as the download+import on the destination datanode
+    initializeReplicationSupervisor(conf);
+
+    final ContainerOperationClient containerOperationClient =
+        new ContainerOperationClient(conf);
+
+    final List<ContainerInfo> containerInfos =
+        containerOperationClient.listContainer(0L, 1_000_000);
+
+    replicationTasks = new ArrayList<>();
+
+    for (ContainerInfo container : containerInfos) {
+
+      final ContainerWithPipeline containerWithPipeline =
+          containerOperationClient
+              .getContainerWithPipeline(container.getContainerID());
+
+      if (container.getState() == LifeCycleState.CLOSED) {
+
+        final List<DatanodeDetails> datanodesWithContainer =
+            containerWithPipeline.getPipeline().getNodes();
+
+        final List<String> datanodeUUIDs =
+            datanodesWithContainer
+                .stream().map(DatanodeDetails::getUuidString)
+                .collect(Collectors.toList());
+
+        //if a datanode is specified, replicate only containers that have a
+        //replica on it.
+        if (datanode.isEmpty() || datanodeUUIDs.contains(datanode)) {
+          replicationTasks.add(new ReplicationTask(container.getContainerID(),
+              datanodesWithContainer));
+        }
+      }
+
+    }
+
+    //important: override the max number of tasks.
+    setTestNo(replicationTasks.size());
+
+    init();
+
+    timer = getMetrics().timer("replicate-container");
+    runTests(this::replicateContainer);
+    return null;
+  }
+
+  /**
+   * Check that the target directory is not re-used.
+   */
+  private void checkDestinationDirectory(String dirUrl) throws IOException {
+    final StorageLocation storageLocation = StorageLocation.parse(dirUrl);
+    final Path dirPath = Paths.get(storageLocation.getUri().getPath());
+
+    if (Files.notExists(dirPath)) {
+      return;
+    }
+
+    if (Files.list(dirPath).count() == 0) {
+      return;
+    }
+
+    throw new IllegalArgumentException(
+        "Configured storage directory " + dirUrl
+            + " (used as destination) should be empty");
+  }
+
+  @NotNull
+  private void initializeReplicationSupervisor(ConfigurationSource conf)
+      throws IOException {
+    String fakeDatanodeUuid = datanode;
+
+    if (fakeDatanodeUuid.isEmpty()) {
+      fakeDatanodeUuid = UUID.randomUUID().toString();
+    }
+
+    ContainerSet containerSet = new ContainerSet();
+
+    ContainerMetrics metrics = ContainerMetrics.create(conf);
+
+    MutableVolumeSet volumeSet = new MutableVolumeSet(fakeDatanodeUuid, conf);
+
+    Map<ContainerType, Handler> handlers = new HashMap<>();
+
+    for (ContainerType containerType : ContainerType.values()) {
+      final Handler handler =
+          Handler.getHandlerForContainerType(
+              containerType,
+              conf,
+              fakeDatanodeUuid,
+              containerSet,
+              volumeSet,
+              metrics,
+              containerReplicaProto -> {
+              });
+      handler.setScmID(UUID.randomUUID().toString());
+      handlers.put(containerType, handler);
+    }
+
+    ContainerController controller =
+        new ContainerController(containerSet, handlers);
+
+    ContainerReplicator replicator =
+        new DownloadAndImportReplicator(containerSet,
+            controller,
+            new SimpleContainerDownloader(conf, null),
+            new TarContainerPacker());
+
+    supervisor = new ReplicationSupervisor(containerSet, replicator, 10);
+  }
+
+  private void replicateContainer(long counter) throws Exception {
+    timer.time(() -> {
+      final ReplicationTask replicationTask =
+          replicationTasks.get((int) counter);
+      supervisor.new TaskRunner(replicationTask).run();
+      return null;
+    });
+  }
+}
\ No newline at end of file
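
With the Freon registration below, the new generator should be reachable as a freon subcommand, roughly `ozone freon cr --datanode=<datanode-uuid>` (name and option taken from the @Command/@Option annotations above; the exact invocation is assumed, not verified here). Note that it imports containers into the locally configured datanode storage directories, which is why checkDestinationDirectory() requires them to be empty.
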
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/DatanodeChunkGenerator.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/DatanodeChunkGenerator.java
index a6e2832..c0c58d0 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/DatanodeChunkGenerator.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/DatanodeChunkGenerator.java
@@ -16,12 +16,19 @@
  */
 package org.apache.hadoop.ozone.freon;
 
+import java.io.IOException;
 import java.nio.charset.StandardCharsets;
+import java.util.Set;
 import java.util.List;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Arrays;
 import java.util.concurrent.Callable;
+import java.util.stream.Collectors;
 
 import org.apache.hadoop.hdds.cli.HddsVersionProvider;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumData;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChecksumType;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
@@ -75,72 +82,121 @@
       description = "Pipeline to use. By default the first RATIS/THREE "
           + "pipeline will be used.",
       defaultValue = "")
-  private String pipelineId;
+  private String pipelineIds;
 
-  private XceiverClientSpi xceiverClientSpi;
+  @Option(names = {"-d", "--datanodes"},
+      description = "Datanodes to use." +
+          " Test will write to all the existing pipelines " +
+          "which this datanode is member of.",
+      defaultValue = "")
+  private String datanodes;
+
+  private XceiverClientManager xceiverClientManager;
+  private List<XceiverClientSpi> xceiverClients;
 
   private Timer timer;
 
   private ByteString dataToWrite;
   private ChecksumData checksumProtobuf;
 
+
   @Override
   public Void call() throws Exception {
 
-    init();
 
     OzoneConfiguration ozoneConf = createOzoneConfiguration();
+    xceiverClientManager =
+        new XceiverClientManager(ozoneConf);
     if (OzoneSecurityUtil.isSecurityEnabled(ozoneConf)) {
       throw new IllegalArgumentException(
           "Datanode chunk generator is not supported in secure environment");
     }
 
+    List<String> pipelinesFromCmd = Arrays.asList(pipelineIds.split(","));
+
+    List<String> datanodeHosts = Arrays.asList(this.datanodes.split(","));
+
+    Set<Pipeline> pipelines;
+
     try (StorageContainerLocationProtocol scmLocationClient =
-        createStorageContainerLocationClient(ozoneConf)) {
-      List<Pipeline> pipelines = scmLocationClient.listPipelines();
-      Pipeline pipeline;
-      if (pipelineId != null && pipelineId.length() > 0) {
-        pipeline = pipelines.stream()
-            .filter(p -> p.getId().toString().equals(pipelineId))
-            .findFirst()
-            .orElseThrow(() -> new IllegalArgumentException(
-                "Pipeline ID is defined, but there is no such pipeline: "
-                    + pipelineId));
-
+               createStorageContainerLocationClient(ozoneConf)) {
+      List<Pipeline> pipelinesFromSCM = scmLocationClient.listPipelines();
+      Pipeline firstPipeline;
+      init();
+      if (!arePipelinesOrDatanodesProvided()) {
+        //default behaviour if no arguments provided
+        firstPipeline = pipelinesFromSCM.stream()
+              .filter(p -> p.getFactor() == ReplicationFactor.THREE)
+              .findFirst()
+              .orElseThrow(() -> new IllegalArgumentException(
+                  "Pipeline ID is NOT defined, and no pipeline " +
+                      "has been found with factor=THREE"));
+        XceiverClientSpi xceiverClientSpi = xceiverClientManager
+            .acquireClient(firstPipeline);
+        xceiverClients = new ArrayList<>();
+        xceiverClients.add(xceiverClientSpi);
       } else {
-        pipeline = pipelines.stream()
-            .filter(p -> p.getFactor() == ReplicationFactor.THREE)
-            .findFirst()
-            .orElseThrow(() -> new IllegalArgumentException(
-                "Pipeline ID is NOT defined, and no pipeline " +
-                    "has been found with factor=THREE"));
-        LOG.info("Using pipeline {}", pipeline.getId());
+        xceiverClients = new ArrayList<>();
+        pipelines = new HashSet<>();
+        for (String pipelineId : pipelinesFromCmd) {
+          List<Pipeline> selectedPipelines = pipelinesFromSCM.stream()
+              .filter(p -> p.getId().toString()
+                  .equals("PipelineID=" + pipelineId)
+                  || pipelineContainsDatanode(p, datanodeHosts))
+              .collect(Collectors.toList());
+          pipelines.addAll(selectedPipelines);
+        }
+        for (Pipeline p : pipelines) {
+          LOG.info("Writing to pipeline: {}", p.getId());
+          xceiverClients.add(xceiverClientManager.acquireClient(p));
+        }
+        if (pipelines.isEmpty()) {
+          throw new IllegalArgumentException(
+              "Couldn't find the any/the selected pipeline");
+        }
       }
-
-      try (XceiverClientManager xceiverClientManager =
-               new XceiverClientManager(ozoneConf)) {
-        xceiverClientSpi = xceiverClientManager.acquireClient(pipeline);
-
-        timer = getMetrics().timer("chunk-write");
-
-        byte[] data = RandomStringUtils.randomAscii(chunkSize)
-            .getBytes(StandardCharsets.UTF_8);
-
-        dataToWrite = ByteString.copyFrom(data);
-
-        Checksum checksum = new Checksum(ChecksumType.CRC32, chunkSize);
-        checksumProtobuf = checksum.computeChecksum(data).getProtoBufMessage();
-
-        runTests(this::writeChunk);
-      }
+      runTest();
     } finally {
-      if (xceiverClientSpi != null) {
-        xceiverClientSpi.close();
+      for (XceiverClientSpi xceiverClientSpi : xceiverClients) {
+        if (xceiverClientSpi != null) {
+          xceiverClientSpi.close();
+        }
       }
     }
     return null;
   }
 
+  private boolean pipelineContainsDatanode(Pipeline p,
+      List<String> datanodeHosts) {
+    for (DatanodeDetails dn : p.getNodes()) {
+      if (datanodeHosts.contains(dn.getHostName())) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  private boolean arePipelinesOrDatanodesProvided() {
+    return !(pipelineIds.equals("") && datanodes.equals(""));
+  }
+
+
+  private void runTest()
+      throws IOException {
+
+    timer = getMetrics().timer("chunk-write");
+
+    byte[] data = RandomStringUtils.randomAscii(chunkSize)
+        .getBytes(StandardCharsets.UTF_8);
+
+    dataToWrite = ByteString.copyFrom(data);
+
+    Checksum checksum = new Checksum(ChecksumType.CRC32, chunkSize);
+    checksumProtobuf = checksum.computeChecksum(data).getProtoBufMessage();
+
+    runTests(this::writeChunk);
+  }
+
   private void writeChunk(long stepNo)
       throws Exception {
 
@@ -165,7 +221,19 @@
             .setChunkData(chunkInfo)
             .setData(dataToWrite);
 
-    String id = xceiverClientSpi.getPipeline().getFirstNode().getUuidString();
+    XceiverClientSpi clientSpi = xceiverClients.get(
+        (int) (stepNo % xceiverClients.size()));
+    sendWriteChunkRequest(blockId, writeChunkRequest,
+        clientSpi);
+
+  }
+
+  private void sendWriteChunkRequest(DatanodeBlockID blockId,
+      WriteChunkRequestProto.Builder writeChunkRequest,
+      XceiverClientSpi xceiverClientSpi) throws Exception {
+    DatanodeDetails datanodeDetails = xceiverClientSpi.
+        getPipeline().getFirstNode();
+    String id = datanodeDetails.getUuidString();
 
     ContainerCommandRequestProto.Builder builder =
         ContainerCommandRequestProto
@@ -188,7 +256,6 @@
       }
       return null;
     });
-
   }
 
 }
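
For readability, the pipeline selection above can be summed up as a single predicate: a pipeline is used when its id matches one of the requested ids, or when any of its nodes is one of the requested datanode hosts. A standalone sketch of that predicate (the Pipeline import path is assumed, and the code relies on PipelineID.toString() rendering as "PipelineID=<uuid>", exactly as the filter in the patch does):

    import java.util.List;
    import java.util.function.Predicate;

    import org.apache.hadoop.hdds.protocol.DatanodeDetails;
    import org.apache.hadoop.hdds.scm.pipeline.Pipeline;

    // Hypothetical helper equivalent to the filter + pipelineContainsDatanode pair.
    final class PipelineSelection {
      static Predicate<Pipeline> matching(List<String> pipelineIds,
          List<String> datanodeHosts) {
        return p -> pipelineIds.stream()
            .anyMatch(id -> p.getId().toString().equals("PipelineID=" + id))
            || p.getNodes().stream()
                .map(DatanodeDetails::getHostName)
                .anyMatch(datanodeHosts::contains);
      }
    }
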
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/Freon.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/Freon.java
index 1b03540..d3c5ae6 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/Freon.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/Freon.java
@@ -52,7 +52,8 @@
         DatanodeBlockPutter.class,
         FollowerAppendLogEntryGenerator.class,
         ChunkManagerDiskWrite.class,
-        LeaderAppendLogEntryGenerator.class},
+        LeaderAppendLogEntryGenerator.class,
+        ClosedContainerReplicator.class},
     versionProvider = HddsVersionProvider.class,
     mixinStandardHelpOptions = true)
 public class Freon extends GenericCli {
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/HadoopFsGenerator.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/HadoopFsGenerator.java
index 925ba7d..1f0c3e9 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/HadoopFsGenerator.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/HadoopFsGenerator.java
@@ -16,6 +16,8 @@
  */
 package org.apache.hadoop.ozone.freon;
 
+import java.io.IOException;
+import java.io.UncheckedIOException;
 import java.net.URI;
 import java.util.concurrent.Callable;
 
@@ -26,8 +28,6 @@
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 
 import com.codahale.metrics.Timer;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 import picocli.CommandLine.Command;
 import picocli.CommandLine.Option;
 
@@ -43,9 +43,6 @@
 public class HadoopFsGenerator extends BaseFreonGenerator
     implements Callable<Void> {
 
-  private static final Logger LOG =
-      LoggerFactory.getLogger(HadoopFsGenerator.class);
-
   @Option(names = {"--path"},
       description = "Hadoop FS file system path",
       defaultValue = "o3fs://bucket1.vol1")
@@ -70,16 +67,26 @@
 
   private Timer timer;
 
-  private FileSystem fileSystem;
+  private OzoneConfiguration configuration;
+  private URI uri;
+  private final ThreadLocal<FileSystem> threadLocalFileSystem =
+      ThreadLocal.withInitial(this::createFS);
 
   @Override
   public Void call() throws Exception {
-
     init();
 
-    OzoneConfiguration configuration = createOzoneConfiguration();
+    configuration = createOzoneConfiguration();
+    uri = URI.create(rootPath);
+    String disableCacheName = String.format("fs.%s.impl.disable.cache",
+        uri.getScheme());
+    print("Disabling FS cache: " + disableCacheName);
+    configuration.setBoolean(disableCacheName, true);
 
-    fileSystem = FileSystem.get(URI.create(rootPath), configuration);
+    Path file = new Path(rootPath + "/" + generateObjectName(0));
+    try (FileSystem fileSystem = threadLocalFileSystem.get()) {
+      fileSystem.mkdirs(file.getParent());
+    }
 
     contentGenerator =
         new ContentGenerator(fileSize, bufferSize, copyBufferSize);
@@ -93,7 +100,7 @@
 
   private void createFile(long counter) throws Exception {
     Path file = new Path(rootPath + "/" + generateObjectName(counter));
-    fileSystem.mkdirs(file.getParent());
+    FileSystem fileSystem = threadLocalFileSystem.get();
 
     timer.time(() -> {
       try (FSDataOutputStream output = fileSystem.create(file)) {
@@ -102,4 +109,22 @@
       return null;
     });
   }
+
+  private FileSystem createFS() {
+    try {
+      return FileSystem.get(uri, configuration);
+    } catch (IOException e) {
+      throw new UncheckedIOException(e);
+    }
+  }
+
+  @Override
+  protected void taskLoopCompleted() {
+    FileSystem fileSystem = threadLocalFileSystem.get();
+    try {
+      fileSystem.close();
+    } catch (IOException e) {
+      throw new UncheckedIOException(e);
+    }
+  }
 }
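
The reason the FS cache has to be disabled is that FileSystem.get() otherwise returns one shared, cached instance per scheme, so closing it in taskLoopCompleted() would break every other worker thread. A runnable illustration using the local file system (the generator applies the same property to the scheme derived from --path):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    final class FsCacheSketch {
      public static void main(String[] args) throws Exception {
        URI uri = URI.create("file:///tmp");
        Configuration conf = new Configuration();

        // With the cache enabled (the default) both calls return the same object.
        boolean shared = FileSystem.get(uri, conf) == FileSystem.get(uri, conf);

        // Disabling the cache yields a fresh instance per call (and per thread).
        conf.setBoolean("fs.file.impl.disable.cache", true);
        boolean distinct = FileSystem.get(uri, conf) != FileSystem.get(uri, conf);

        System.out.println("shared=" + shared + ", distinct=" + distinct);
      }
    }
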
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreReads.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreReads.java
index bf40ebc..b810da2 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreReads.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreReads.java
@@ -27,7 +27,7 @@
 import org.openjdk.jmh.infra.Blackhole;
 
 import java.io.IOException;
-import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 
 import static org.apache.hadoop.ozone.genesis.GenesisUtil.CACHE_10MB_TYPE;
 import static org.apache.hadoop.ozone.genesis.GenesisUtil.CACHE_1GB_TYPE;
@@ -52,9 +52,9 @@
   public void initialize() throws IOException {
     store = GenesisUtil.getMetadataStore(this.type);
     byte[] data = RandomStringUtils.randomAlphanumeric(DATA_LEN)
-        .getBytes(Charset.forName("UTF-8"));
+        .getBytes(StandardCharsets.UTF_8);
     for (int x = 0; x < MAX_KEYS; x++) {
-      store.put(Long.toHexString(x).getBytes(Charset.forName("UTF-8")), data);
+      store.put(Long.toHexString(x).getBytes(StandardCharsets.UTF_8), data);
     }
     if (type.compareTo(CLOSED_TYPE) == 0) {
       store.compactDB();
@@ -65,6 +65,6 @@
   public void test(Blackhole bh) throws IOException {
     long x = org.apache.commons.lang3.RandomUtils.nextLong(0L, MAX_KEYS);
     bh.consume(
-        store.get(Long.toHexString(x).getBytes(Charset.forName("UTF-8"))));
+        store.get(Long.toHexString(x).getBytes(StandardCharsets.UTF_8)));
   }
 }
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreWrites.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreWrites.java
index aa7aedd..51010ec 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreWrites.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkMetadataStoreWrites.java
@@ -26,7 +26,7 @@
 import org.openjdk.jmh.annotations.State;
 
 import java.io.IOException;
-import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 
 import static org.apache.hadoop.ozone.genesis.GenesisUtil.CACHE_10MB_TYPE;
 import static org.apache.hadoop.ozone.genesis.GenesisUtil.CACHE_1GB_TYPE;
@@ -50,13 +50,13 @@
   @Setup
   public void initialize() throws IOException {
     data = RandomStringUtils.randomAlphanumeric(DATA_LEN)
-        .getBytes(Charset.forName("UTF-8"));
+        .getBytes(StandardCharsets.UTF_8);
     store = GenesisUtil.getMetadataStore(this.type);
   }
 
   @Benchmark
   public void test() throws IOException {
     long x = org.apache.commons.lang3.RandomUtils.nextLong(0L, MAX_KEYS);
-    store.put(Long.toHexString(x).getBytes(Charset.forName("UTF-8")), data);
+    store.put(Long.toHexString(x).getBytes(StandardCharsets.UTF_8), data);
   }
 }
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkRocksDbStore.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkRocksDbStore.java
index daf44ec..9f79b82 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkRocksDbStore.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkRocksDbStore.java
@@ -28,7 +28,7 @@
 
 import java.io.File;
 import java.io.IOException;
-import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 import java.nio.file.Paths;
 
 /**
@@ -68,7 +68,7 @@
   @Setup(Level.Trial)
   public void initialize() throws IOException {
     data = RandomStringUtils.randomAlphanumeric(DATA_LEN)
-        .getBytes(Charset.forName("UTF-8"));
+        .getBytes(StandardCharsets.UTF_8);
     org.rocksdb.Options opts = new org.rocksdb.Options();
     File dbFile = Paths.get(System.getProperty(TMP_DIR))
         .resolve(RandomStringUtils.randomNumeric(DB_FILE_LEN))
@@ -112,8 +112,8 @@
   @Benchmark
   public void test(Blackhole bh) throws IOException {
     long x = org.apache.commons.lang3.RandomUtils.nextLong(0L, MAX_KEYS);
-    store.put(Long.toHexString(x).getBytes(Charset.forName("UTF-8")), data);
+    store.put(Long.toHexString(x).getBytes(StandardCharsets.UTF_8), data);
     bh.consume(
-        store.get(Long.toHexString(x).getBytes(Charset.forName("UTF-8"))));
+        store.get(Long.toHexString(x).getBytes(StandardCharsets.UTF_8)));
   }
 }
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java
index 76b32b2..71039f4 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java
@@ -45,7 +45,6 @@
 
 import java.io.File;
 import java.io.IOException;
-import java.nio.charset.Charset;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.Paths;
@@ -72,7 +71,6 @@
 
   private Options options;
   private BasicParser parser;
-  private final Charset encoding = Charset.forName("UTF-8");
   private final OzoneConfiguration conf;
 
   // for container.db
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/ClearSpaceQuotaOptions.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/ClearSpaceQuotaOptions.java
index af304c0..8653c73 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/ClearSpaceQuotaOptions.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/ClearSpaceQuotaOptions.java
@@ -28,16 +28,16 @@
       description = "clear space quota")
   private boolean clrSpaceQuota;
 
-  @CommandLine.Option(names = {"--count-quota"},
-      description = "clear count quota")
-  private boolean clrCountQuota;
+  @CommandLine.Option(names = {"--namespace-quota"},
+      description = "clear namespace quota")
+  private boolean clrNamespaceQuota;
 
   public boolean getClrSpaceQuota() {
     return clrSpaceQuota;
   }
 
-  public boolean getClrCountQuota() {
-    return clrCountQuota;
+  public boolean getClrNamespaceQuota() {
+    return clrNamespaceQuota;
   }
 
 }
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/SetSpaceQuotaOptions.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/SetSpaceQuotaOptions.java
index 8dea3a9..6030b85 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/SetSpaceQuotaOptions.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/SetSpaceQuotaOptions.java
@@ -29,17 +29,17 @@
       description = "The maximum space quota can be used (eg. 1GB)")
   private String quotaInBytes;
 
-  @CommandLine.Option(names = {"--count-quota"},
+  @CommandLine.Option(names = {"--namespace-quota"},
       description = "For volume this parameter represents the number of " +
           "buckets, and for buckets represents the number of keys (eg. 5)")
-  private long quotaInCounts;
+  private long quotaInNamespace;
 
   public String getQuotaInBytes() {
     return quotaInBytes;
   }
 
-  public long getQuotaInCounts() {
-    return quotaInCounts;
+  public long getQuotaInNamespace() {
+    return quotaInNamespace;
   }
 
 }
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/ClearQuotaHandler.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/ClearQuotaHandler.java
index 5e63a89..160475e 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/ClearQuotaHandler.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/ClearQuotaHandler.java
@@ -48,8 +48,8 @@
     if (clrSpaceQuota.getClrSpaceQuota()) {
       bucket.clearSpaceQuota();
     }
-    if (clrSpaceQuota.getClrCountQuota()) {
-      bucket.clearCountQuota();
+    if (clrSpaceQuota.getClrNamespaceQuota()) {
+      bucket.clearNamespaceQuota();
     }
   }
 }
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/CreateBucketHandler.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/CreateBucketHandler.java
index 9b281cd..b8f2072 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/CreateBucketHandler.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/CreateBucketHandler.java
@@ -82,10 +82,10 @@
 
     if (quotaOptions.getQuotaInBytes() != null) {
       bb.setQuotaInBytes(OzoneQuota.parseQuota(quotaOptions.getQuotaInBytes(),
-          quotaOptions.getQuotaInCounts()).getQuotaInBytes());
+          quotaOptions.getQuotaInNamespace()).getQuotaInBytes());
     }
 
-    bb.setQuotaInCounts(quotaOptions.getQuotaInCounts());
+    bb.setQuotaInNamespace(quotaOptions.getQuotaInNamespace());
 
     String volumeName = address.getVolumeName();
     String bucketName = address.getBucketName();
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/SetQuotaHandler.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/SetQuotaHandler.java
index 91a62e1..21d0321 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/SetQuotaHandler.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/SetQuotaHandler.java
@@ -46,17 +46,17 @@
     OzoneBucket bucket = client.getObjectStore().getVolume(volumeName)
         .getBucket(bucketName);
     long spaceQuota = bucket.getQuotaInBytes();
-    long countQuota = bucket.getQuotaInCounts();
+    long namespaceQuota = bucket.getQuotaInNamespace();
 
     if (quotaOptions.getQuotaInBytes() != null
         && !quotaOptions.getQuotaInBytes().isEmpty()) {
       spaceQuota = OzoneQuota.parseQuota(quotaOptions.getQuotaInBytes(),
-          quotaOptions.getQuotaInCounts()).getQuotaInBytes();
+          quotaOptions.getQuotaInNamespace()).getQuotaInBytes();
     }
-    if (quotaOptions.getQuotaInCounts() >= 0) {
-      countQuota = quotaOptions.getQuotaInCounts();
+    if (quotaOptions.getQuotaInNamespace() >= 0) {
+      namespaceQuota = quotaOptions.getQuotaInNamespace();
     }
 
-    bucket.setQuota(OzoneQuota.getOzoneQuota(spaceQuota, countQuota));
+    bucket.setQuota(OzoneQuota.getOzoneQuota(spaceQuota, namespaceQuota));
   }
 }
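
The renamed accessors line up with the client API used by the handlers. A hedged sketch of setting both quotas on a bucket programmatically, mirroring the handler above (the OzoneQuota package is assumed; volume and bucket names are illustrative):

    import org.apache.hadoop.hdds.client.OzoneQuota;   // package assumed
    import org.apache.hadoop.hdds.conf.OzoneConfiguration;
    import org.apache.hadoop.ozone.client.OzoneBucket;
    import org.apache.hadoop.ozone.client.OzoneClient;
    import org.apache.hadoop.ozone.client.OzoneClientFactory;

    final class SetBucketQuotaSketch {
      public static void main(String[] args) throws Exception {
        try (OzoneClient client =
            OzoneClientFactory.getRpcClient(new OzoneConfiguration())) {
          OzoneBucket bucket = client.getObjectStore()
              .getVolume("vol1").getBucket("bucket1");
          // 1GB of space and up to 100 keys in the bucket
          long spaceQuota = OzoneQuota.parseQuota("1GB", 100).getQuotaInBytes();
          bucket.setQuota(OzoneQuota.getOzoneQuota(spaceQuota, 100));
        }
      }
    }
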
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/ClearQuotaHandler.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/ClearQuotaHandler.java
index 72ae903..fc5dc96 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/ClearQuotaHandler.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/ClearQuotaHandler.java
@@ -46,8 +46,8 @@
     if (clrSpaceQuota.getClrSpaceQuota()) {
       volume.clearSpaceQuota();
     }
-    if (clrSpaceQuota.getClrCountQuota()) {
-      volume.clearCountQuota();
+    if (clrSpaceQuota.getClrNamespaceQuota()) {
+      volume.clearNamespaceQuota();
     }
   }
 }
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/CreateVolumeHandler.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/CreateVolumeHandler.java
index dca24e3..f5bd4e8 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/CreateVolumeHandler.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/CreateVolumeHandler.java
@@ -62,10 +62,10 @@
     if (quotaOptions.getQuotaInBytes() != null) {
       volumeArgsBuilder.setQuotaInBytes(OzoneQuota.parseQuota(
           quotaOptions.getQuotaInBytes(),
-          quotaOptions.getQuotaInCounts()).getQuotaInBytes());
+          quotaOptions.getQuotaInNamespace()).getQuotaInBytes());
     }
 
-    volumeArgsBuilder.setQuotaInCounts(quotaOptions.getQuotaInCounts());
+    volumeArgsBuilder.setQuotaInNamespace(quotaOptions.getQuotaInNamespace());
 
     client.getObjectStore().createVolume(volumeName,
         volumeArgsBuilder.build());
diff --git a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/SetQuotaHandler.java b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/SetQuotaHandler.java
index a15abba..23aa333 100644
--- a/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/SetQuotaHandler.java
+++ b/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/SetQuotaHandler.java
@@ -45,17 +45,17 @@
     OzoneVolume volume = client.getObjectStore().getVolume(volumeName);
 
     long spaceQuota = volume.getQuotaInBytes();
-    long countQuota = volume.getQuotaInCounts();
+    long namespaceQuota = volume.getQuotaInNamespace();
 
     if (quotaOptions.getQuotaInBytes() != null
         && !quotaOptions.getQuotaInBytes().isEmpty()) {
       spaceQuota = OzoneQuota.parseQuota(quotaOptions.getQuotaInBytes(),
-          quotaOptions.getQuotaInCounts()).getQuotaInBytes();
+          quotaOptions.getQuotaInNamespace()).getQuotaInBytes();
     }
-    if (quotaOptions.getQuotaInCounts() >= 0) {
-      countQuota = quotaOptions.getQuotaInCounts();
+    if (quotaOptions.getQuotaInNamespace() >= 0) {
+      namespaceQuota = quotaOptions.getQuotaInNamespace();
     }
 
-    volume.setQuota(OzoneQuota.getOzoneQuota(spaceQuota, countQuota));
+    volume.setQuota(OzoneQuota.getOzoneQuota(spaceQuota, namespaceQuota));
   }
 }
diff --git a/pom.xml b/pom.xml
index 16b3ac6..0e108c5 100644
--- a/pom.xml
+++ b/pom.xml
@@ -79,10 +79,10 @@
     <declared.ozone.version>${ozone.version}</declared.ozone.version>
 
     <!-- Apache Ratis version -->
-    <ratis.version>1.1.0-c5eafb9-SNAPSHOT</ratis.version>
+    <ratis.version>1.1.0-0bdf24f-SNAPSHOT</ratis.version>
 
     <!-- Apache Ratis thirdparty version -->
-    <ratis.thirdparty.version>0.6.0-SNAPSHOT</ratis.thirdparty.version>
+    <ratis.thirdparty.version>0.6.0</ratis.thirdparty.version>
 
     <distMgmtSnapshotsId>apache.snapshots.https</distMgmtSnapshotsId>
     <distMgmtSnapshotsName>Apache Development Snapshot Repository</distMgmtSnapshotsName>
@@ -117,7 +117,7 @@
 
     <java.security.egd>file:///dev/urandom</java.security.egd>
 
-    <bouncycastle.version>1.60</bouncycastle.version>
+    <bouncycastle.version>1.67</bouncycastle.version>
 
     <!-- jersey version -->
     <jersey.version>1.19</jersey.version>
@@ -138,7 +138,7 @@
     <httpcore.version>4.4.13</httpcore.version>
 
     <!-- SLF4J/LOG4J version -->
-    <slf4j.version>1.7.25</slf4j.version>
+    <slf4j.version>1.7.30</slf4j.version>
     <log4j.version>1.2.17</log4j.version>
     <log4j2.version>2.13.3</log4j2.version>
     <disruptor.version>3.4.2</disruptor.version>
@@ -178,11 +178,11 @@
 
     <!-- Maven protoc compiler -->
     <protobuf-maven-plugin.version>0.5.1</protobuf-maven-plugin.version>
-    <protobuf-compile.version>3.11.0</protobuf-compile.version>
-    <grpc-compile.version>1.29.0</grpc-compile.version>
+    <protobuf-compile.version>3.12.0</protobuf-compile.version>
+    <grpc-compile.version>1.33.0</grpc-compile.version>
     <os-maven-plugin.version>1.5.0.Final</os-maven-plugin.version>
 
-    <netty.version>4.1.48.Final</netty.version>
+    <netty.version>4.1.51.Final</netty.version>
 
     <!-- define the Java language version used by the compiler -->
     <javac.version>1.8</javac.version>
@@ -205,7 +205,7 @@
     <maven-compiler-plugin.version>3.1</maven-compiler-plugin.version>
     <maven-install-plugin.version>2.5.1</maven-install-plugin.version>
     <maven-resources-plugin.version>3.1.0</maven-resources-plugin.version>
-    <maven-shade-plugin.version>3.2.0</maven-shade-plugin.version>
+    <maven-shade-plugin.version>3.2.4</maven-shade-plugin.version>
     <maven-jar-plugin.version>2.5</maven-jar-plugin.version>
     <maven-war-plugin.version>3.1.0</maven-war-plugin.version>
     <maven-source-plugin.version>2.3</maven-source-plugin.version>
@@ -224,7 +224,7 @@
     <maven-checkstyle-plugin.version>3.1.0</maven-checkstyle-plugin.version>
     <checkstyle.version>8.29</checkstyle.version>
     <surefire.fork.timeout>1200</surefire.fork.timeout>
-    <aws-java-sdk.version>1.11.615</aws-java-sdk.version>
+    <aws-java-sdk.version>1.11.901</aws-java-sdk.version>
     <hsqldb.version>2.3.4</hsqldb.version>
     <frontend-maven-plugin.version>1.10.0</frontend-maven-plugin.version>
     <!-- the version of Hadoop declared in the version resources; can be overridden
@@ -233,7 +233,7 @@
     <proto-backwards-compatibility.version>1.0.5</proto-backwards-compatibility.version>
 
     <swagger-annotations-version>1.5.4</swagger-annotations-version>
-    <snakeyaml.version>1.16</snakeyaml.version>
+    <snakeyaml.version>1.26</snakeyaml.version>
     <sonar.java.binaries>${basedir}/target/classes</sonar.java.binaries>
   </properties>
 
diff --git a/tools/fault-injection-service/README.md b/tools/fault-injection-service/README.md
index d988107..7fffd23 100644
--- a/tools/fault-injection-service/README.md
+++ b/tools/fault-injection-service/README.md
@@ -47,7 +47,7 @@
     - Injecting delays on various filesystem interfaces.
     - Injecting a specific failure on a specific path for a specific operation.
     - Simulate temporary or on-disk data corruption on IO path.
-    - Reseting specific or all the failures injected so far.
+    - Resetting specific or all the failures injected so far.
 
 - some unit test binaries