| commit | 59bd1d7106568ee48465782c15b1cc13f29aa377 | [log] [tgz] |
|---|---|---|
| author | Štefan Miklošovič <smiklosovic@apache.org> | Mon Nov 10 19:15:28 2025 +0100 |
| committer | GitHub <noreply@github.com> | Mon Nov 10 10:15:28 2025 -0800 |
| tree | bafec70ab70550fc1a469e60f279863e9948d1b7 | |
| parent | 8c8115656d3925878b44628b224677e426ec702e [diff] |
CASSANALYTICS-103 update Guice to 7.0.0 (#154) Patch by Stefan Miklosovic; reviewed by Francisco Guerrero, Bernardo Botella for CASSANALYTICS-103
The open-source repository for the Cassandra Spark Bulk Reader. This library provides integration between Cassandra and Spark, allowing users to run arbitrary Spark jobs against a Cassandra cluster securely and consistently.
This project contains the necessary open-source implementations to connect to a Cassandra cluster and read the data into Spark.
For example usage, see the example repository; sample steps:
```scala
import org.apache.cassandra.spark.sparksql.CassandraDataSource
import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder.getOrCreate()
val df = sparkSession.read.format("org.apache.cassandra.spark.sparksql.CassandraDataSource")
                          .option("sidecar_contact_points", "localhost,localhost2,localhost3")
                          .option("keyspace", "sbr_tests")
                          .option("table", "basic_test")
                          .option("DC", "datacenter1")
                          .option("createSnapshot", true)
                          .option("numCores", 4)
                          .load()
```
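The `df` returned above is an ordinary Spark DataFrame, so standard transformations and actions apply to the bulk-read data. As a brief sketch (the column name `id` is a hypothetical example, not a column guaranteed to exist in `basic_test`):

```scala
// Inspect the schema inferred from the Cassandra table
df.printSchema()

// Count all rows read from the cluster
val rowCount = df.count()

// Project a column and preview a few rows; "id" is an assumed column name
df.select("id").show(10)
```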
The Cassandra Spark Bulk Writer allows high-speed data ingestion into Cassandra clusters running Cassandra 3.0 and 4.0.
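A Bulk Writer job follows the same DataFrame pattern as the reader, but through `df.write`. The sketch below is illustrative only: the sink class name, option keys, and source path are assumptions and should be checked against the example repository rather than taken as the authoritative API.

```scala
import org.apache.spark.sql.SparkSession

// NOTE: the sink class name and option keys below are assumptions for
// illustration; consult the example repository for the exact API.
val sparkSession = SparkSession.builder.getOrCreate()

// Source data to ingest; the path is a placeholder
val df = sparkSession.read.parquet("/path/to/source/data")

df.write.format("org.apache.cassandra.spark.sparksql.CassandraDataSink")
        .option("sidecar_contact_points", "localhost,localhost2,localhost3")
        .option("keyspace", "sbr_tests")
        .option("table", "basic_test")
        .option("bulk_writer_cl", "LOCAL_QUORUM") // assumed consistency-level option
        .mode("append")
        .save()
```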
Developers interested in contributing to the Analytics library should see the DEV-README.
For example usage, see the example repository. This example covers setting up Cassandra 4.0 and Apache Sidecar, and running Spark Bulk Reader and Spark Bulk Writer jobs.
Contributions are welcome!
Please join us in #cassandra-dev on ASF Slack.
Issues are tracked in JIRA.