# Comet Metrics

## Spark SQL Metrics

Set `spark.comet.metrics.detailed=true` to see all available Comet metrics.
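As a minimal sketch, the flag can be set on an existing `SparkSession` (assuming the setting is mutable at runtime in your environment; otherwise pass it with `--conf` at submit time):

```scala
// Sketch: enable detailed Comet metrics for the current session.
// If the setting is not runtime-mutable, pass it at submit time instead:
//   spark-submit --conf spark.comet.metrics.detailed=true ...
spark.conf.set("spark.comet.metrics.detailed", "true")
```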

### CometScanExec

| Metric    | Description |
| --------- | ----------- |
| scan time | Total time to scan a Parquet file. This is not directly comparable to the same metric in Spark, because Comet's scan metric is more accurate: both Comet and Spark measure the time in nanoseconds, but Spark rounds it to the nearest millisecond per batch while Comet does not. |
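Beyond the Spark UI, SQL metrics like these can be read programmatically from an executed plan via Spark's `SparkPlan.metrics` map. A hedged sketch, where `df` is assumed to be a DataFrame produced with Comet enabled and on which an action has already run:

```scala
import org.apache.spark.sql.execution.SparkPlan

// Sketch: walk the executed plan and print the accumulated SQL metrics
// of every Comet operator (node names starting with "Comet").
val plan: SparkPlan = df.queryExecution.executedPlan
plan.foreach { node =>
  if (node.nodeName.startsWith("Comet")) {
    node.metrics.foreach { case (name, metric) =>
      println(s"${node.nodeName}: $name = ${metric.value}")
    }
  }
}
```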

### Exchange

Comet adds some additional metrics:

| Metric | Description |
| ------ | ----------- |
| native shuffle time | Total time in native code, excluding any child operators. |
| repartition time | Time to repartition batches. |
| memory pool time | Time interacting with the memory pool. |
| encoding and compression time | Time to encode batches in IPC format and compress them using ZSTD. |

## Native Metrics

Setting `spark.comet.explain.native.enabled=true` will cause native plans to be logged on each executor. Metrics are logged for each native plan (and there is one plan per task, so this is very verbose).
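For example, a minimal sketch of enabling this when the session is built (the builder pattern is standard Spark; whether the flag can instead be toggled at runtime is an assumption to verify for your deployment):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: turn on native plan/metrics logging when building the session.
// Expect one logged plan per task, so keep this off in production.
val spark = SparkSession.builder()
  .appName("comet-native-explain")
  .config("spark.comet.explain.native.enabled", "true")
  .getOrCreate()
```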

Here is a guide to some of the native metrics.

### ScanExec

| Metric | Description |
| ------ | ----------- |
| `elapsed_compute` | Total time spent in this operator, fetching batches from a JVM iterator. |
| `jvm_fetch_time` | Time spent in the JVM fetching input batches to be read by this ScanExec instance. |
| `arrow_ffi_time` | Time spent using Arrow FFI to create Arrow batches from the memory addresses returned from the JVM. |

### ShuffleWriterExec

| Metric | Description |
| ------ | ----------- |
| `elapsed_compute` | Total time, excluding any child operators. |
| `repart_time` | Time to repartition batches. |
| `ipc_time` | Time to encode batches in IPC format and compress them using ZSTD. |
| `mempool_time` | Time interacting with the memory pool. |
| `write_time` | Time spent writing bytes to disk. |