commit 0aaf5659028dd874c8d666c636f11eae63c429e6
author:    Arjun Ashok <arjun_ashok@apple.com>    Mon Oct 09 07:53:40 2023 -0700
committer: Francisco Guerrero <frankgh@apache.org>    Wed Dec 20 09:05:16 2023 -0800
tree:      9c0b5af760de534b5a5fb3732543459215a8d5f8
parent:    672d66a64a21e23c4d81c089b426360c2bb708b7

CASSANDRA-18852: Changes to make the bulk writer resilient to cluster resize operations

Patch by Arjun Ashok, Saranya Krishnakumar; reviewed by Yifan Cai, Francisco Guerrero, Doug Rohrer for CASSANDRA-18852

Co-authored-by: Arjun Ashok <arjun_ashok@apple.com>
Co-authored-by: Saranya Krishnakumar <saranya_k@apple.com>
The open-source repository for the Cassandra Spark Bulk Reader. This library integrates Cassandra with Spark, allowing users to run arbitrary Spark jobs against a Cassandra cluster securely and consistently.
This project contains the necessary open-source implementations to connect to a Cassandra cluster and read the data into Spark.
For example usage, see the example repository. Sample steps:

```scala
import org.apache.cassandra.spark.sparksql.CassandraDataSource
import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder.getOrCreate()
val df = sparkSession.read.format("org.apache.cassandra.spark.sparksql.CassandraDataSource")
  .option("sidecar_instances", "localhost,localhost2,localhost3")
  .option("keyspace", "sbr_tests")
  .option("table", "basic_test")
  .option("DC", "datacenter1")
  .option("createSnapshot", true)
  .option("numCores", 4)
  .load()
```
The Cassandra Spark Bulk Writer enables high-speed data ingest into clusters running Cassandra 3.0 and 4.0.
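As a rough counterpart to the reader example above, a bulk write might look like the sketch below. Note this is a hypothetical sketch: the sink class name (`CassandraDataSink`) and the writer-specific option names (`local_dc`, `bulk_writer_cl`) are assumptions modeled on the reader options, not confirmed API. Consult the example repository for authoritative usage.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder.getOrCreate()

// A small DataFrame to ingest; in practice this would be your job's output
val df = spark.range(10).selectExpr("id", "cast(id as string) as value")

// Hypothetical sink class and options, mirrored from the reader example above
df.write
  .format("org.apache.cassandra.spark.sparksql.CassandraDataSink")
  .option("sidecar_instances", "localhost,localhost2,localhost3")
  .option("keyspace", "sbr_tests")
  .option("table", "basic_test")
  .option("local_dc", "datacenter1")      // assumed option: target local datacenter
  .option("bulk_writer_cl", "LOCAL_QUORUM") // assumed option: write consistency level
  .mode(SaveMode.Append)
  .save()
```

Running this requires a reachable Cassandra cluster fronted by Apache Sidecar, so it is not runnable standalone.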
Developers interested in contributing to the Analytics library should see the DEV-README.
For example usage, see the example repository. The example covers setting up Cassandra 4.0 and Apache Sidecar, and running Spark Bulk Reader and Spark Bulk Writer jobs.