IMPALA-9110: Add table loading time break-down metrics for HdfsTable

A. Problem:
Catalog table loading currently records only the total loading
time. We need some break-down times, i.e. more detailed time
recording for each loading function. Also, table schema loading
is not accounted for in load-duration, so we need to add some
more metrics for that.

B. Solution:
- We added "hms-load-tbl-schema", "load-duration.all-column-stats",
"load-duration.all-partitions.total-time" and
"load-duration.all-partitions.file-metadata".
We also log the loadValidWriteIdList() time, so we now have a
more detailed break-down of the table loading time.

The table loading time metrics for HDFS tables form the following hierarchy
(a sketch of how these timers are recorded follows the list):
- Table Schema Loading
- Table Metadata Loading - total time
    - all column stats loading time
    - ValidWriteIds loading time
    - all partitions loading time - total time
        - file metadata loading time
    - storage metadata loading time (standalone metric)
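
Each of these break-down times is recorded with a codahale Timer in the
table's metrics (see the constants added in Table.java and HdfsTable.java
below). A minimal sketch of the recording pattern used throughout this
change, using the all-partitions timer as an example:

    // Start a Timer.Context for the loading phase and stop it when done.
    final Timer.Context allPartitionsLdContext =
        getMetrics().getTimer(HdfsTable.LOAD_DURATION_ALL_PARTITIONS).time();
    // ... fetch partitions from HMS and load them ...
    allPartitionsLdContext.stop();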

1. Table Schema Loading:
* Meaning: The time for HMS to fetch the table object and the real schema loading time.
Normally, the code path is "msClient.getHiveClient().getTable(dbName, tblName)".
* Metric : hms-load-tbl-schema
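
This metric is recorded outside of Table.load(): callers wrap the HMS
getTable() call in a Stopwatch and report the elapsed time through the
new Table.updateHMSLoadTableSchemaTime() helper. Condensed from the
TableLoader.load() change below:

    Stopwatch hmsLoadSW = new Stopwatch().start();
    synchronized (metastoreAccessLock_) {
      msTbl = msClient.getHiveClient().getTable(db.getName(), tblName);
    }
    long hmsLoadTime = hmsLoadSW.elapsed(TimeUnit.NANOSECONDS);
    // ... table type checks elided ...
    table.updateHMSLoadTableSchemaTime(hmsLoadTime);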

2. Table Metadata Loading -- total time
* Meaning: The time to load all the table metadata.
The code path is the HdfsTable.load() function.
* Metric: load-duration.total-time

2.1 Table Metadata Loading -- all column stats
* Meaning: Load all column stats; this is part of table metadata loading.
The code path is HdfsTable.loadAllColumnStats()
* Metric: load-duration.all-column-stats

2.2 Table Metadata Loading -- loadValidWriteIdList
* Meaning: Fetch the ValidWriteIds from HMS.
The code path is HdfsTable.loadValidWriteIdList()
* Metric: no metric is recorded for this one. Instead, a debug log is
generated.
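
The timing is condensed from the Table.loadValidWriteIdList() change below;
a Stopwatch around the body feeds a debug log instead of a metric:

    Stopwatch sw = new Stopwatch().start();
    // ... fetch the ValidWriteIdList from HMS for transactional tables ...
    LOG.debug("Load Valid Write Id List Done. Time taken: " +
        PrintUtils.printTimeNs(sw.elapsed(TimeUnit.NANOSECONDS)));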

2.3 Table Metadata Loading -- storage metadata loading (standalone metric)
* Meaning: Storage-related file system operations during metadata
loading (the amount of time spent loading metadata from the underlying
storage layer).
* Metric: renamed to load-duration.storage-metadata. This metric was
introduced by IMPALA-7322.

2.4 Table Metadata Loading -- load all partitions
* Meaning: The time to load all partitions, including fetching all partitions
from HMS and loading them. The code paths are
MetaStoreUtil.fetchAllPartitions() and HdfsTable.loadAllPartitions().
* Metric: load-duration.all-partitions

2.4.1 Table Metadata Loading -- load all partitions -- load file metadata
* Meaning: The file metadata loading for all partitions. (This is
part of 2.4.) Code path: loadFileMetadataForPartitions() inside
loadAllPartitions().
* Metric: load-duration.all-partitions.file-metadata

C. Extra things in this commit:
1. Add PrintUtils.printTimeNs for pretty-printing times in the frontend
(example outputs below).
2. Add an explanation of the table loading manager.
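
Example outputs of the new helper (values taken from the new unit tests in
PrintUtilsTest.java):

    PrintUtils.printTimeNs(1000);           // "1.000us"
    PrintUtils.printTimeNs(1000000);        // "1.000ms"
    PrintUtils.printTimeNs(100000000000L);  // "1m40s"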

D. Test:
1. Add unit tests for the PrintUtils.printTimeNs()/printTimeMs() functions.
2. Manually run describe on tables and verify that the table loading
metrics are correct.

Change-Id: I5381f9316df588b2004876c6cd9fb7e674085b10
Reviewed-on: http://gerrit.cloudera.org:8080/14611
Reviewed-by: Vihang Karajgaonkar <vihang@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
diff --git a/fe/src/main/java/org/apache/impala/catalog/CatalogServiceCatalog.java b/fe/src/main/java/org/apache/impala/catalog/CatalogServiceCatalog.java
index a2f4e75..74a39e4 100644
--- a/fe/src/main/java/org/apache/impala/catalog/CatalogServiceCatalog.java
+++ b/fe/src/main/java/org/apache/impala/catalog/CatalogServiceCatalog.java
@@ -35,6 +35,7 @@
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
+import com.google.common.base.Stopwatch;
 import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.CachePoolEntry;
@@ -1546,6 +1547,8 @@
           try (ThreadNameAnnotator tna = new ThreadNameAnnotator(annotation)) {
             dbName = dbName.toLowerCase();
             Db oldDb = oldDbCache.get(dbName);
+            // invalidateDb() will return an empty table list
+            // if loadInBackground_ is set to false
             Pair<Db, List<TTableName>> invalidatedDb = invalidateDb(msClient,
                 dbName, oldDb);
             if (invalidatedDb == null) continue;
@@ -2028,12 +2031,17 @@
       versionLock_.writeLock().unlock();
       try (MetaStoreClient msClient = getMetaStoreClient()) {
         org.apache.hadoop.hive.metastore.api.Table msTbl = null;
+        Stopwatch hmsLoadSW = new Stopwatch().start();
+        long hmsLoadTime;
         try {
           msTbl = msClient.getHiveClient().getTable(dbName, tblName);
         } catch (Exception e) {
           throw new TableLoadingException("Error loading metadata for table: " +
               dbName + "." + tblName, e);
+        } finally {
+          hmsLoadTime = hmsLoadSW.elapsed(TimeUnit.NANOSECONDS);
         }
+        tbl.updateHMSLoadTableSchemaTime(hmsLoadTime);
         tbl.load(true, msClient.getHiveClient(), msTbl, reason);
       }
       tbl.setCatalogVersion(newCatalogVersion);
diff --git a/fe/src/main/java/org/apache/impala/catalog/HBaseTable.java b/fe/src/main/java/org/apache/impala/catalog/HBaseTable.java
index 4da0371..2a1a5cc 100644
--- a/fe/src/main/java/org/apache/impala/catalog/HBaseTable.java
+++ b/fe/src/main/java/org/apache/impala/catalog/HBaseTable.java
@@ -104,7 +104,7 @@
     try (Timer.Context timer = getMetrics().getTimer(Table.LOAD_DURATION_METRIC).time()) {
       msTable_ = msTbl;
       final Timer.Context storageLoadTimer =
-          getMetrics().getTimer(Table.STORAGE_METADATA_LOAD_DURATION_METRIC).time();
+          getMetrics().getTimer(Table.LOAD_DURATION_STORAGE_METADATA).time();
       List<Column> cols;
       try {
         hbaseTableName_ = Util.getHBaseTableName(getMetaStoreTable());
diff --git a/fe/src/main/java/org/apache/impala/catalog/HdfsTable.java b/fe/src/main/java/org/apache/impala/catalog/HdfsTable.java
index 96856a8..3a6d71f 100644
--- a/fe/src/main/java/org/apache/impala/catalog/HdfsTable.java
+++ b/fe/src/main/java/org/apache/impala/catalog/HdfsTable.java
@@ -160,6 +160,18 @@
   public static final String MEMORY_ESTIMATE_METRIC = "memory-estimate-bytes";
   public static final String HAS_INCREMENTAL_STATS_METRIC = "has-incremental-stats";
 
+  // Time to load all partitions, including fetching all partitions
+  // from HMS and loading them. The code paths are
+  // MetaStoreUtil.fetchAllPartitions() and HdfsTable.loadAllPartitions()
+  public static final String LOAD_DURATION_ALL_PARTITIONS =
+      "load-duration.all-partitions";
+
+  // The file metadata loading for all partitions. This is part of
+  // LOAD_DURATION_ALL_PARTITIONS
+  // Code path: loadFileMetadataForPartitions() inside loadAllPartitions()
+  public static final String LOAD_DURATION_FILE_METADATA_ALL_PARTITIONS =
+      "load-duration.all-partitions.file-metadata";
+
   // string to indicate NULL. set in load() from table properties
   private String nullColumnValue_;
 
@@ -557,7 +569,10 @@
       }
     }
     // Load the file metadata from scratch.
+    Timer.Context fileMetadataLdContext = getMetrics().getTimer(
+        HdfsTable.LOAD_DURATION_FILE_METADATA_ALL_PARTITIONS).time();
     loadFileMetadataForPartitions(client, partitionMap_.values(), /*isRefresh=*/false);
+    fileMetadataLdContext.stop();
     return clock.getTick() - startTime;
   }
 
@@ -581,6 +596,8 @@
    */
   private void loadFileMetadataForPartitions(IMetaStoreClient client,
       Iterable<HdfsPartition> parts, boolean isRefresh) throws CatalogException {
+    final Clock clock = Clock.defaultClock();
+    long startTime = clock.getTick();
     // Group the partitions by their path (multiple partitions may point to the same
     // path).
     Map<Path, List<HdfsPartition>> partsByPath = Maps.newHashMap();
@@ -646,7 +663,6 @@
     }
 
     // TODO(todd): would be good to log a summary of the loading process:
-    // - how long did it take?
     // - how many block locations did we reuse/load individually/load via batch
     // - how many partitions did we read metadata for
     // - etc...
@@ -657,8 +673,9 @@
           Iterables.size(parts) - 3);
     }
 
-    LOG.info("Loaded file and block metadata for {} partitions: {}", getFullName(),
-        partNames);
+    long duration = clock.getTick() - startTime;
+    LOG.info("Loaded file and block metadata for {} partitions: {}. Time taken: {}",
+        getFullName(), partNames, PrintUtils.printTimeNs(duration));
   }
 
   /**
@@ -963,7 +980,7 @@
         msTbl.getDbName(), msTbl.getTableName(), reason);
     LOG.info(annotation);
     final Timer storageLdTimer =
-        getMetrics().getTimer(Table.STORAGE_METADATA_LOAD_DURATION_METRIC);
+        getMetrics().getTimer(Table.LOAD_DURATION_STORAGE_METADATA);
     storageMetadataLoadTime_ = 0;
     try (ThreadNameAnnotator tna = new ThreadNameAnnotator(annotation)) {
       // turn all exceptions into TableLoadingException
@@ -995,12 +1012,15 @@
           LOG.info("Incrementally loaded table metadata for: " + getFullName());
         } else {
           LOG.info("Fetching partition metadata from the Metastore: " + getFullName());
+          final Timer.Context allPartitionsLdContext =
+              getMetrics().getTimer(HdfsTable.LOAD_DURATION_ALL_PARTITIONS).time();
           // Load all partitions from Hive Metastore, including file metadata.
           List<org.apache.hadoop.hive.metastore.api.Partition> msPartitions =
               MetaStoreUtil.fetchAllPartitions(
                   client, db_.getName(), name_, NUM_PARTITION_FETCH_RETRIES);
           LOG.info("Fetched partition metadata from the Metastore: " + getFullName());
           storageMetadataLoadTime_ = loadAllPartitions(client, msPartitions, msTbl);
+          allPartitionsLdContext.stop();
         }
         if (loadTableSchema) setAvroSchema(client, msTbl);
         setTableStats(msTbl);
@@ -1017,7 +1037,7 @@
       long load_time_duration = context.stop();
       if (load_time_duration > LOADING_WARNING_TIME_NS) {
         LOG.warn("Time taken on loading table " + getFullName() + " exceeded " +
-            "warning threshold. Time: " + load_time_duration + " ns");
+            "warning threshold. Time: " + PrintUtils.printTimeNs(load_time_duration));
       }
       updateTableLoadingTime();
     }
diff --git a/fe/src/main/java/org/apache/impala/catalog/KuduTable.java b/fe/src/main/java/org/apache/impala/catalog/KuduTable.java
index 9c5b3a6..4ef5d95 100644
--- a/fe/src/main/java/org/apache/impala/catalog/KuduTable.java
+++ b/fe/src/main/java/org/apache/impala/catalog/KuduTable.java
@@ -335,7 +335,7 @@
       setTableStats(msTable_);
       // Load metadata from Kudu
       final Timer.Context ctxStorageLdTime =
-          getMetrics().getTimer(Table.STORAGE_METADATA_LOAD_DURATION_METRIC).time();
+          getMetrics().getTimer(Table.LOAD_DURATION_STORAGE_METADATA).time();
       try {
         loadSchemaFromKudu();
       } catch (ImpalaRuntimeException e) {
diff --git a/fe/src/main/java/org/apache/impala/catalog/Table.java b/fe/src/main/java/org/apache/impala/catalog/Table.java
index a8587c6..ee6e92a 100644
--- a/fe/src/main/java/org/apache/impala/catalog/Table.java
+++ b/fe/src/main/java/org/apache/impala/catalog/Table.java
@@ -24,6 +24,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.concurrent.locks.ReentrantLock;
 
@@ -36,11 +37,12 @@
 import org.apache.hadoop.hive.metastore.api.SQLForeignKey;
 import org.apache.hadoop.hive.metastore.api.SQLPrimaryKey;
 import org.apache.impala.analysis.TableName;
-import org.apache.impala.compat.MetastoreShim;
 import org.apache.impala.common.ImpalaRuntimeException;
 import org.apache.impala.common.Metrics;
 import org.apache.impala.common.Pair;
+import org.apache.impala.common.PrintUtils;
 import org.apache.impala.common.RuntimeEnv;
+import org.apache.impala.compat.MetastoreShim;
 import org.apache.impala.service.MetadataOp;
 import org.apache.impala.thrift.TAccessLevel;
 import org.apache.impala.thrift.TCatalogObject;
@@ -58,7 +60,9 @@
 import org.apache.impala.util.HdfsCachingUtil;
 import org.apache.log4j.Logger;
 
+import com.codahale.metrics.Timer;
 import com.google.common.base.Preconditions;
+import com.google.common.base.Stopwatch;
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.Lists;
 
@@ -149,9 +153,23 @@
   // Table can define additional metrics specific to that table type.
   public static final String REFRESH_DURATION_METRIC = "refresh-duration";
   public static final String ALTER_DURATION_METRIC = "alter-duration";
+
+  // The time to load all the table metadata.
   public static final String LOAD_DURATION_METRIC = "load-duration";
-  public static final String STORAGE_METADATA_LOAD_DURATION_METRIC =
-      "storage-metadata-load-duration";
+
+  // Time spent on storage-related file system operations during metadata
+  // loading, i.e. the time spent loading metadata from the underlying storage layer.
+  public static final String LOAD_DURATION_STORAGE_METADATA =
+      "load-duration.storage-metadata";
+
+  // The time for HMS to fetch the table object and the real schema loading time.
+  // Normally, the code path is "msClient.getHiveClient().getTable(dbName, tblName)"
+  public static final String HMS_LOAD_TBL_SCHEMA = "hms-load-tbl-schema";
+
+  // Load all column stats; this is part of table metadata loading.
+  // The code path is HdfsTable.loadAllColumnStats()
+  public static final String LOAD_DURATION_ALL_COLUMN_STATS =
+      "load-duration.all-column-stats";
 
   // Table property key for storing the time of the last DDL operation.
   public static final String TBL_PROP_LAST_DDL_TIME = "transient_lastDdlTime";
@@ -241,7 +259,9 @@
     metrics_.addTimer(REFRESH_DURATION_METRIC);
     metrics_.addTimer(ALTER_DURATION_METRIC);
     metrics_.addTimer(LOAD_DURATION_METRIC);
-    metrics_.addTimer(STORAGE_METADATA_LOAD_DURATION_METRIC);
+    metrics_.addTimer(LOAD_DURATION_STORAGE_METADATA);
+    metrics_.addTimer(HMS_LOAD_TBL_SCHEMA);
+    metrics_.addTimer(LOAD_DURATION_ALL_COLUMN_STATS);
   }
 
   public Metrics getMetrics() { return metrics_; }
@@ -262,6 +282,11 @@
     return lastUsedTime_;
   }
 
+  public void updateHMSLoadTableSchemaTime(long hmsLoadTimeNS) {
+    this.metrics_.getTimer(Table.HMS_LOAD_TBL_SCHEMA).
+        update(hmsLoadTimeNS, TimeUnit.NANOSECONDS);
+  }
+
   /**
    * Populate members of 'this' from metastore info. If 'reuseMetadata' is true, reuse
    * valid existing metadata.
@@ -310,24 +335,29 @@
    * the correctness of the system.
    */
   protected void loadAllColumnStats(IMetaStoreClient client) {
-    if (LOG.isTraceEnabled()) LOG.trace("Loading column stats for table: " + name_);
-    List<ColumnStatisticsObj> colStats;
-
-    // We need to only query those columns which may have stats; asking HMS for other
-    // columns causes loadAllColumnStats() to return nothing.
-    // TODO(todd): this no longer seems to be true - asking for a non-existent column
-    // is just ignored, and the columns that do exist are returned.
-    List<String> colNames = getColumnNamesWithHmsStats();
-
+    final Timer.Context columnStatsLdContext =
+        getMetrics().getTimer(LOAD_DURATION_ALL_COLUMN_STATS).time();
     try {
-      colStats = MetastoreShim.getTableColumnStatistics(client, db_.getName(), name_,
-          colNames);
-    } catch (Exception e) {
-      LOG.warn("Could not load column statistics for: " + getFullName(), e);
-      return;
-    }
+      if (LOG.isTraceEnabled()) LOG.trace("Loading column stats for table: " + name_);
+      List<ColumnStatisticsObj> colStats;
 
-    FeCatalogUtils.injectColumnStats(colStats, this);
+      // We need to only query those columns which may have stats; asking HMS for other
+      // columns causes loadAllColumnStats() to return nothing.
+      // TODO(todd): this no longer seems to be true - asking for a non-existent column
+      // is just ignored, and the columns that do exist are returned.
+      List<String> colNames = getColumnNamesWithHmsStats();
+
+      try {
+        colStats = MetastoreShim.getTableColumnStatistics(client, db_.getName(), name_,
+            colNames);
+      } catch (Exception e) {
+        LOG.warn("Could not load column statistics for: " + getFullName(), e);
+        return;
+      }
+      FeCatalogUtils.injectColumnStats(colStats, this);
+    } finally {
+      columnStatsLdContext.stop();
+    }
   }
 
   /**
@@ -360,6 +390,7 @@
    */
   protected void loadValidWriteIdList(IMetaStoreClient client)
       throws TableLoadingException {
+    Stopwatch sw = new Stopwatch().start();
     Preconditions.checkState(msTable_ != null && msTable_.getParameters() != null);
     if (MetastoreShim.getMajorVersion() > 2 &&
         AcidUtils.isTransactionalTable(msTable_.getParameters())) {
@@ -367,6 +398,8 @@
     } else {
       validWriteIds_ = null;
     }
+    LOG.debug("Load Valid Write Id List Done. Time taken: " +
+        PrintUtils.printTimeNs(sw.elapsed(TimeUnit.NANOSECONDS)));
   }
 
   /**
diff --git a/fe/src/main/java/org/apache/impala/catalog/TableLoader.java b/fe/src/main/java/org/apache/impala/catalog/TableLoader.java
index db29a7b..634741b 100644
--- a/fe/src/main/java/org/apache/impala/catalog/TableLoader.java
+++ b/fe/src/main/java/org/apache/impala/catalog/TableLoader.java
@@ -64,9 +64,11 @@
          MetaStoreClient msClient = catalog_.getMetaStoreClient()) {
       org.apache.hadoop.hive.metastore.api.Table msTbl = null;
       // All calls to getTable() need to be serialized due to HIVE-5457.
+      Stopwatch hmsLoadSW = new Stopwatch().start();
       synchronized (metastoreAccessLock_) {
         msTbl = msClient.getHiveClient().getTable(db.getName(), tblName);
       }
+      long hmsLoadTime = hmsLoadSW.elapsed(TimeUnit.NANOSECONDS);
       // Check that the Hive TableType is supported
       TableType tableType = TableType.valueOf(msTbl.getTableType());
       if (!MetastoreShim.IMPALA_SUPPORTED_TABLE_TYPES.contains(tableType)) {
@@ -80,6 +82,7 @@
         throw new TableLoadingException(
             "Unrecognized table type for table: " + fullTblName);
       }
+      table.updateHMSLoadTableSchemaTime(hmsLoadTime);
       table.load(false, msClient.getHiveClient(), msTbl, reason);
       table.validate();
     } catch (TableLoadingException e) {
diff --git a/fe/src/main/java/org/apache/impala/catalog/TableLoadingMgr.java b/fe/src/main/java/org/apache/impala/catalog/TableLoadingMgr.java
index e2ad513..0a90e56 100644
--- a/fe/src/main/java/org/apache/impala/catalog/TableLoadingMgr.java
+++ b/fe/src/main/java/org/apache/impala/catalog/TableLoadingMgr.java
@@ -164,8 +164,8 @@
     numLoadingThreads_ = numLoadingThreads;
     tblLoadingPool_ = Executors.newFixedThreadPool(numLoadingThreads_);
 
-    // Start the background table loading threads.
-    startTableLoadingThreads();
+    // Start the background table loading submitter threads.
+    startTableLoadingSubmitterThreads();
 
     // Start the asyncRefreshThread_. Currently used to wait for cache directives to
     // complete in the background.
@@ -255,15 +255,19 @@
   }
 
   /**
-   * Starts table loading threads in a fixed sized thread pool with a size
+   * Starts table loading submitter threads in a fixed sized thread pool with a size
    * defined by NUM_TBL_LOADING_THREADS. Each thread polls the tableLoadingDeque_
-   * for new tables to load.
+   * for new tables to load. Note these threads are just for submitting the
+   * load requests; the real table loading threads are in tblLoadingPool_.
+   * There is a discussion here: https://issues.apache.org/jira/browse/IMPALA-9140
+   * which explains the table loading mechanism well.
    */
-  private void startTableLoadingThreads() {
-    ExecutorService loadingPool = Executors.newFixedThreadPool(numLoadingThreads_);
+  private void startTableLoadingSubmitterThreads() {
+    ExecutorService submitterLoadingPool =
+        Executors.newFixedThreadPool(numLoadingThreads_);
     try {
       for (int i = 0; i < numLoadingThreads_; ++i) {
-        loadingPool.execute(new Runnable() {
+        submitterLoadingPool.execute(new Runnable() {
           @Override
           public void run() {
             while (true) {
@@ -278,7 +282,7 @@
         });
       }
     } finally {
-      loadingPool.shutdown();
+      submitterLoadingPool.shutdown();
     }
   }
 
diff --git a/fe/src/main/java/org/apache/impala/common/PrintUtils.java b/fe/src/main/java/org/apache/impala/common/PrintUtils.java
index 57a082d..125135a 100644
--- a/fe/src/main/java/org/apache/impala/common/PrintUtils.java
+++ b/fe/src/main/java/org/apache/impala/common/PrintUtils.java
@@ -211,4 +211,51 @@
     if (!it.hasNext()) return "";
     return "'" + Joiner.on("', '").join(it) + "'";
   }
+
+  public static String printTimeNs(long value) {
+    if (value < 0)
+      return "";
+    if (value >= 1000000000) {
+      return printTimeMs(value / 1000000);
+    } else if (value >= 1000000) {
+      return String.format("%.3fms", (double) value / 1000000);
+    } else if (value >= 1000) {
+      return String.format("%.3fus", (double) value / 1000);
+    } else {
+      return value + "ns";
+    }
+  }
+
+  public static String printTimeMs(long value) {
+    if (value < 0) {
+      return "";
+    }
+    boolean hour = false, minute = false, second = false;
+    StringBuilder sb = new StringBuilder();
+    if (value >= 3600000) {
+      sb.append(value / 3600000).append("h");
+      value %= 3600000;
+      hour = true;
+    }
+    if (value >= 60000) {
+      sb.append(value / 60000).append("m");
+      value %= 60000;
+      minute = true;
+    }
+    if (!hour && value >= 1000) {
+      sb.append(value / 1000).append("s");
+      value %= 1000;
+      second = true;
+    }
+    if (!hour && !minute && value != 0) {
+      String remainTime = String.valueOf(value);
+      if (second) {
+        while (remainTime.length() < 3) {
+          remainTime = "0" + remainTime;
+        }
+      }
+      sb.append(remainTime).append("ms");
+    }
+    return sb.toString();
+  }
 }
diff --git a/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java b/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java
index 31c1a2f..3fba93b 100644
--- a/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java
+++ b/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java
@@ -19,6 +19,7 @@
 
 import static org.apache.impala.analysis.Analyzer.ACCESSTYPE_READWRITE;
 
+import com.google.common.base.Stopwatch;
 import com.google.common.collect.Iterables;
 
 import java.io.ByteArrayOutputStream;
@@ -32,6 +33,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
+import java.util.concurrent.TimeUnit;
 import java.util.stream.Collectors;
 
 import org.apache.hadoop.fs.FSDataInputStream;
@@ -1735,12 +1737,17 @@
       org.apache.hadoop.hive.metastore.api.Table msTbl = existingTbl.getMetaStoreTable();
       if (msTbl == null) {
         Preconditions.checkState(existingTbl instanceof IncompleteTable);
+        Stopwatch hmsLoadSW = new Stopwatch().start();
+        long hmsLoadTime;
         try (MetaStoreClient msClient = catalog_.getMetaStoreClient()) {
           msTbl = msClient.getHiveClient().getTable(tableName.getDb(),
               tableName.getTbl());
         } catch (TException e) {
           LOG.error(String.format(HMS_RPC_ERROR_FORMAT_STR, "getTable") + e.getMessage());
+        } finally {
+          hmsLoadTime = hmsLoadSW.elapsed(TimeUnit.NANOSECONDS);
         }
+        existingTbl.updateHMSLoadTableSchemaTime(hmsLoadTime);
       }
       boolean isSynchronizedTable = msTbl != null &&
               KuduTable.isKuduTable(msTbl) && KuduTable.isSynchronizedTable(msTbl);
@@ -3833,12 +3840,17 @@
     Preconditions.checkNotNull(msClient);
     Db db = tbl.getDb();
     org.apache.hadoop.hive.metastore.api.Table msTbl = null;
+    Stopwatch hmsLoadSW = new Stopwatch().start();
+    long hmsLoadTime;
     try {
       msTbl = msClient.getHiveClient().getTable(db.getName(), tbl.getName());
     } catch (Exception e) {
       throw new TableLoadingException("Error loading metadata for table: " +
           db.getName() + "." + tbl.getName(), e);
+    } finally {
+      hmsLoadTime = hmsLoadSW.elapsed(TimeUnit.NANOSECONDS);
     }
+    tbl.updateHMSLoadTableSchemaTime(hmsLoadTime);
     return msTbl;
   }
 
diff --git a/fe/src/test/java/org/apache/impala/util/PrintUtilsTest.java b/fe/src/test/java/org/apache/impala/util/PrintUtilsTest.java
index 68b055c..5f8b61c 100644
--- a/fe/src/test/java/org/apache/impala/util/PrintUtilsTest.java
+++ b/fe/src/test/java/org/apache/impala/util/PrintUtilsTest.java
@@ -157,4 +157,28 @@
     assertEquals("'a'", PrintUtils.joinQuoted(ImmutableList.of("a")));
     assertEquals("'a', 'b'", PrintUtils.joinQuoted(ImmutableList.of("a", "b")));
   }
+
+  /**
+   * Tests for PrintUtils.printTimeNs and PrintUtils.printTimeMs.
+   */
+  @Test
+  public void testPrintTime() {
+    assertEquals("", PrintUtils.printTimeNs(-1));
+    assertEquals("0ns", PrintUtils.printTimeNs(0));
+    assertEquals("1ns", PrintUtils.printTimeNs(1));
+    assertEquals("10ns", PrintUtils.printTimeNs(10));
+    assertEquals("100ns", PrintUtils.printTimeNs(100));
+    assertEquals("1.000us", PrintUtils.printTimeNs(1000));
+    assertEquals("10.000us", PrintUtils.printTimeNs(10000));
+    assertEquals("100.000us", PrintUtils.printTimeNs(100000));
+    assertEquals("1.000ms", PrintUtils.printTimeNs(1000000));
+    assertEquals("10.000ms", PrintUtils.printTimeNs(10000000));
+    assertEquals("100.000ms", PrintUtils.printTimeNs(100000000));
+    assertEquals("1s", PrintUtils.printTimeNs(1000000000));
+    assertEquals("10s", PrintUtils.printTimeNs(10000000000L));
+    assertEquals("1m40s", PrintUtils.printTimeNs(100000000000L));
+    assertEquals("16m40s", PrintUtils.printTimeNs(1000000000000L));
+    assertEquals("2h46m", PrintUtils.printTimeNs(10000000000000L));
+    assertEquals("27h46m", PrintUtils.printTimeNs(100000000000000L));
+  }
 }