[#1572] fix(spark): Exceptions might be discarded when spilling buffers (#1573)

### What changes were proposed in this pull request?

Handle all exceptions thrown while spilling buffers instead of silently discarding them, and log a warning so users know when a TimeoutException occurs.
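The fix wraps the wait on the spill futures so that a timeout or failure is logged rather than swallowed, while the `finally` block still counts the memory released by whichever tasks did complete. A minimal, self-contained sketch of that pattern (the `waitAndSum` helper and its arguments are hypothetical, not Uniffle's actual API):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class SpillWaitSketch {
  // Best-effort wait: never lets an exception escape. On timeout the
  // underlying spill tasks are NOT cancelled; the finally-style sum below
  // still credits whatever memory the finished tasks released.
  static long waitAndSum(List<CompletableFuture<Long>> futures, long timeoutSec, String taskId) {
    try {
      CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
          .get(timeoutSec, TimeUnit.SECONDS);
    } catch (TimeoutException te) {
      System.err.println("[taskId: " + taskId + "] Spill tasks timeout after "
          + timeoutSec + " seconds");
    } catch (Exception e) {
      System.err.println("[taskId: " + taskId + "] Failed to spill buffers: " + e);
    }
    // Count only futures that finished successfully.
    return futures.stream()
        .filter(f -> f.isDone() && !f.isCompletedExceptionally())
        .mapToLong(CompletableFuture::join)
        .sum();
  }

  public static void main(String[] args) {
    List<CompletableFuture<Long>> futures = List.of(
        CompletableFuture.completedFuture(64L),                       // spilled 64 bytes
        CompletableFuture.failedFuture(new RuntimeException("disk full")),
        new CompletableFuture<>());  // never completes -> timeout branch fires
    System.out.println(waitAndSum(futures, 1, "task-0"));  // prints 64
  }
}
```

Because `allOf(...).get(timeout, ...)` only throws, and never cancels, the worker threads keep spilling in the background, which is why the patch describes the timeout handling as "best effort".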

### Why are the changes needed?

Fix https://github.com/apache/incubator-uniffle/issues/1572.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Existing UTs.
diff --git a/client-spark/common/src/main/java/org/apache/spark/shuffle/writer/WriteBufferManager.java b/client-spark/common/src/main/java/org/apache/spark/shuffle/writer/WriteBufferManager.java
index 6c4c41a..d826104 100644
--- a/client-spark/common/src/main/java/org/apache/spark/shuffle/writer/WriteBufferManager.java
+++ b/client-spark/common/src/main/java/org/apache/spark/shuffle/writer/WriteBufferManager.java
@@ -482,6 +482,9 @@
     } catch (TimeoutException timeoutException) {
       // A best effort strategy to wait.
       // If timeout exception occurs, the underlying tasks won't be cancelled.
+      LOG.warn("[taskId: {}] Spill tasks timeout after {} seconds", taskId, memorySpillTimeoutSec);
+    } catch (Exception e) {
+      LOG.warn("[taskId: {}] Failed to spill buffers", taskId, e);
     } finally {
       long releasedSize =
           futures.stream()