commit aea798dc7e517af520a403d4d86f3bc6bed65092
author: Yifan Cai <52585731+yifan-c@users.noreply.github.com> — Mon Apr 22 15:46:08 2024 -0700
committer: GitHub <noreply@github.com> — Mon Apr 22 15:46:08 2024 -0700
tree: 555556b6213a564b4cab109ae83cf8bb8475f41b
parent: 690101840d4d8f9c656bb0ca114f6619af80e1cf
CASSANDRA-19563: Support bulk write via S3 (#53)

This commit adds a configuration (writer) option to pick a transport other than the previously-implemented "direct upload to all sidecars" (now known as the "Direct" transport). The second transport, now being implemented, is the "S3_COMPAT" transport, which allows the job to upload the generated SSTables to an S3-compatible storage system, and then inform the Cassandra Sidecar that those files are available for download & commit.

Additionally, a plug-in system was added to allow communications between custom transport hooks and the job, so the custom hook can provide updated credentials and out-of-band status updates on S3-related issues.

Co-Authored-By: Yifan Cai <ycai@apache.org>
Co-Authored-By: Doug Rohrer <drohrer@apple.com>
Co-Authored-By: Francisco Guerrero <frankgh@apache.org>
Co-Authored-By: Saranya Krishnakumar <saranya_k@apple.com>

Patch by Yifan Cai, Doug Rohrer, Francisco Guerrero, Saranya Krishnakumar; Reviewed by Francisco Guerrero for CASSANDRA-19563
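To illustrate the kind of writer configuration this commit describes, a job might select the new transport roughly as sketched below. The option key `data_transport` and its values are illustrative assumptions only; the actual option names introduced by this commit are not shown here, so consult the project's writer documentation for the real keys.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical sketch: the "data_transport" key and its values are
// assumptions for illustration, NOT confirmed option names from this commit.
val spark = SparkSession.builder.getOrCreate()
val df = spark.read.parquet("/path/to/source-data") // any DataFrame to ingest

df.write
  .format("org.apache.cassandra.spark.sparksql.CassandraDataSink") // assumed class name
  .option("keyspace", "sbr_tests")
  .option("table", "basic_test")
  .option("data_transport", "S3_COMPAT") // assumed key; "Direct" would be the prior behavior
  .mode("append")
  .save()
```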
The open-source repository for the Cassandra Spark Bulk Reader. This library enables integration between Cassandra and Spark, allowing users to run arbitrary Spark jobs against a Cassandra cluster securely and consistently.
This project contains the necessary open-source implementations to connect to a Cassandra cluster and read the data into Spark.
For example usage, see the example repository; sample steps:
```scala
import org.apache.cassandra.spark.sparksql.CassandraDataSource
import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder.getOrCreate()
val df = sparkSession.read.format("org.apache.cassandra.spark.sparksql.CassandraDataSource")
                          .option("sidecar_instances", "localhost,localhost2,localhost3")
                          .option("keyspace", "sbr_tests")
                          .option("table", "basic_test")
                          .option("DC", "datacenter1")
                          .option("createSnapshot", true)
                          .option("numCores", 4)
                          .load()
```
The Cassandra Spark Bulk Writer enables high-speed data ingestion into Cassandra clusters running Cassandra 3.0 or 4.0.
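By analogy with the reader example above, a Bulk Writer job might look like the following sketch. The data-sink class name and the reuse of the reader's option keys are assumptions for illustration, not the library's confirmed write API; see the example repository for the exact options.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical sketch: the format class name and option keys below mirror
// the reader example and are assumptions, not a confirmed API.
val spark = SparkSession.builder.getOrCreate()
val df = spark.read.parquet("/path/to/source-data") // any DataFrame whose schema matches the target table

df.write
  .format("org.apache.cassandra.spark.sparksql.CassandraDataSink") // assumed class name
  .option("sidecar_instances", "localhost,localhost2,localhost3")  // assumed, as in the reader example
  .option("keyspace", "sbr_tests")
  .option("table", "basic_test")
  .mode("append")
  .save()
```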
Developers interested in contributing to the Analytics library should see the DEV-README.
For example usage, see the example repository. This example covers setting up Cassandra 4.0 and the Apache Cassandra Sidecar, and running both a Spark Bulk Reader and a Spark Bulk Writer job.