[SPARK-52183] Update `SparkSQLRepl` example to show up to 10k rows

### What changes were proposed in this pull request?

This PR aims to update the `SparkSQLRepl` example to show up to 10k rows.

### Why are the changes needed?

Currently, `SparkSQLRepl` invokes `show()` with its default parameters, which displays at most 20 rows. Although we cannot handle a very large number of rows due to `grpc_max_message_size`, a more generous default is reasonable for a REPL.

```SQL
spark-sql (default)> SELECT * FROM RANGE(21);
+---+
| id|
+---+
|  0|
|...|
| 19|
+---+
only showing top 20 rows
Time taken: 118 ms
```

### Does this PR introduce _any_ user-facing change?

No. This only changes an example.

### How was this patch tested?

Manual test:

```SQL
$ swift run
spark-sql (default)> SELECT * FROM RANGE(10001);
...
only showing top 10000 rows
Time taken: 142 ms
```

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #159 from dongjoon-hyun/SPARK-52183.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
diff --git a/Sources/SparkSQLRepl/main.swift b/Sources/SparkSQLRepl/main.swift
index 6c45ae0..a5c6c5d 100644
--- a/Sources/SparkSQLRepl/main.swift
+++ b/Sources/SparkSQLRepl/main.swift
@@ -46,7 +46,7 @@
       break
     default:
       do {
-        try await spark.time(spark.sql(String(match.1)).show)
+        try await spark.time({ try await spark.sql(String(match.1)).show(10000, false) })
       } catch {
         print("Error: \(error)")
       }