Set `spark.comet.metrics.detailed=true` to see all available Comet metrics.

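As a minimal sketch, the flag can be set when building a session. This assumes Comet is already installed and enabled for the session; the app name is illustrative:

```python
# Minimal sketch, assuming Comet is already installed and configured
# for this session; the app name is illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("comet-metrics-demo")
    .config("spark.comet.metrics.detailed", "true")  # expose all Comet metrics
    .getOrCreate()
)
```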
| Metric | Description |
|---|---|
| scan time | Total time to scan a Parquet file. This is not directly comparable to the same metric in Spark: although both Comet and Spark measure the time in nanoseconds, Spark rounds each batch's time to the nearest millisecond while Comet does not, so Comet's metric is more accurate. |
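The effect of that per-batch rounding can be illustrated with a small arithmetic sketch (the batch timings are hypothetical):

```python
# Hypothetical timings: 1,000 batches that each take 0.4 ms (400,000 ns) to scan.
batch_times_ns = [400_000] * 1000

# Spark-style accounting: round each batch's time to the nearest millisecond,
# then sum. Every 0.4 ms batch rounds down to 0 ms.
spark_total_ms = sum(round(t / 1_000_000) for t in batch_times_ns)

# Comet-style accounting: sum the raw nanoseconds, convert once at the end.
comet_total_ms = sum(batch_times_ns) / 1_000_000

print(spark_total_ms)  # 0
print(comet_total_ms)  # 400.0
```

Per-batch rounding reports zero scan time here, while the true total is 400 ms.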
Comet also adds some additional shuffle metrics:

| Metric | Description |
|---|---|
| native shuffle time | Total time in native code, excluding any child operators. |
| repartition time | Time to repartition batches. |
| memory pool time | Time spent interacting with the memory pool. |
| encoding and compression time | Time to encode batches in IPC format and compress them using ZSTD. |
Setting `spark.comet.explain.native.enabled=true` causes native plans to be logged on each executor. Metrics are logged for each native plan, and since there is one plan per task, this output is very verbose.
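As a sketch, the setting can also be enabled at runtime; this assumes an existing Comet-enabled session bound to the name `spark`, such as the one the pyspark shell provides:

```python
# Hypothetical: assumes `spark` is an existing Comet-enabled SparkSession,
# e.g. the one provided by the pyspark shell.
spark.conf.set("spark.comet.explain.native.enabled", "true")

# Subsequent queries will log each task's native plan, with its metrics,
# in the executor logs.
```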
Here is a guide to some of the native metrics. The first set is reported by `ScanExec`:

| Metric | Description |
|---|---|
| elapsed_compute | Total time spent in this operator, fetching batches from a JVM iterator. |
| jvm_fetch_time | Time spent in the JVM fetching input batches to be read by this ScanExec instance. |
| arrow_ffi_time | Time spent using Arrow FFI to create Arrow batches from the memory addresses returned from the JVM. |
The native shuffle operator reports the following metrics:

| Metric | Description |
|---|---|
| elapsed_compute | Total time excluding any child operators. |
| repart_time | Time to repartition batches. |
| ipc_time | Time to encode batches in IPC format and compress them using ZSTD. |
| mempool_time | Time spent interacting with the memory pool. |
| write_time | Time spent writing bytes to disk. |