commit 6123c8b3229525536518d181de363c22a890f0f6
parent f073e56826a184e8cb5d3f510e84410a1f88040e
Author: Dongjoon Hyun <dongjoon@apache.org>
Date:   Thu Apr 03 21:09:47 2025 +0900
[SPARK-51708] Add `CaseInsensitiveDictionary`

### What changes were proposed in this pull request?

This PR aims to add `CaseInsensitiveDictionary` and use it in the following classes.

- `DataFrameReader`
- `DataFrameWriter`

### Why are the changes needed?

For feature parity.

### Does this PR introduce _any_ user-facing change?

No, this is a new addition to the unreleased version.

### How was this patch tested?

Pass the CIs.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #40 from dongjoon-hyun/SPARK-51708.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
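The implementation itself is not shown in this commit message. As a rough, hypothetical sketch (the type name comes from the commit, but the members, the `String` value type, and the lowercasing strategy are assumptions), a case-insensitive dictionary for reader/writer options might normalize keys on both insert and lookup:

```swift
// Hypothetical sketch, NOT the actual upstream implementation:
// a string-keyed dictionary that lowercases keys so that option
// names like "Header", "header", and "HEADER" refer to one entry.
struct CaseInsensitiveDictionary {
    private var storage: [String: String] = [:]

    subscript(key: String) -> String? {
        get { storage[key.lowercased()] }
        set { storage[key.lowercased()] = newValue }
    }

    var count: Int { storage.count }
}

var options = CaseInsensitiveDictionary()
options["Header"] = "true"
print(options["HEADER"] ?? "nil")  // the same entry, regardless of case
```

This kind of wrapper is a natural fit for `DataFrameReader`/`DataFrameWriter` option maps, where users may spell option names with arbitrary casing.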
This is an experimental Swift library to show how to connect to a remote Apache Spark Connect Server and run SQL statements to manipulate remote data.
So far, this project tracks upstream changes such as the Apache Spark 4.0.0 RC2 release and the Apache Arrow project's Swift support.
Create a Swift project.
```
$ mkdir SparkConnectSwiftApp
$ cd SparkConnectSwiftApp
$ swift package init --name SparkConnectSwiftApp --type executable
```
Add the `SparkConnect` package to the dependencies like the following.
```
$ cat Package.swift
```

```swift
import PackageDescription

let package = Package(
  name: "SparkConnectSwiftApp",
  platforms: [
    .macOS(.v15)
  ],
  dependencies: [
    .package(url: "https://github.com/apache/spark-connect-swift.git", branch: "main")
  ],
  targets: [
    .executableTarget(
      name: "SparkConnectSwiftApp",
      dependencies: [.product(name: "SparkConnect", package: "spark-connect-swift")]
    )
  ]
)
```
Use `SparkSession` of the `SparkConnect` module in Swift.
```
$ cat Sources/main.swift
```

```swift
import SparkConnect

let spark = try await SparkSession.builder.getOrCreate()
print("Connected to Apache Spark \(await spark.version) Server")

let statements = [
  "DROP TABLE IF EXISTS t",
  "CREATE TABLE IF NOT EXISTS t(a INT)",
  "INSERT INTO t VALUES (1), (2), (3)",
]

for s in statements {
  print("EXECUTE: \(s)")
  _ = try await spark.sql(s).count()
}

print("SELECT * FROM t")
try await spark.sql("SELECT * FROM t").show()

await spark.stop()
```
Run your Swift application.
```
$ swift run
...
Connected to Apache Spark 4.0.0 Server
EXECUTE: DROP TABLE IF EXISTS t
EXECUTE: CREATE TABLE IF NOT EXISTS t(a INT)
EXECUTE: INSERT INTO t VALUES (1), (2), (3)
SELECT * FROM t
+---+
| a |
+---+
| 2 |
| 1 |
| 3 |
+---+
```
You can find this example in the following repository.