Cassandra Analytics

Cassandra Spark Bulk Reader

The open-source repository for the Cassandra Spark Bulk Reader. This library integrates Cassandra with Spark, allowing users to run arbitrary Spark jobs against a Cassandra cluster securely and consistently.

This project contains the necessary open-source implementations to connect to a Cassandra cluster and read the data into Spark.

For example usage, see the example repository; sample steps:

import org.apache.cassandra.spark.sparksql.CassandraDataSource
import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder.getOrCreate()
val df = sparkSession.read.format("org.apache.cassandra.spark.sparksql.CassandraDataSource")
                          .option("sidecar_contact_points", "localhost,localhost2,localhost3")
                          .option("keyspace", "sbr_tests")
                          .option("table", "basic_test")
                          .option("DC", "datacenter1")
                          .option("createSnapshot", true)
                          .option("numCores", 4)
                          .load()

Cassandra Spark Bulk Writer

The Cassandra Spark Bulk Writer enables high-speed data ingestion into clusters running Cassandra 3.0 and 4.0.
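A bulk write mirrors the read path shown above: the DataFrame is handed to a Cassandra data-source format with Sidecar contact points and target keyspace/table options. The sketch below assumes a writer class named `CassandraDataSink` and option names (`bulk_writer_cl`, `local_dc`) that parallel the reader example; confirm the exact class and option names against the example repository before use.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val sparkSession = SparkSession.builder.getOrCreate()

// df is the DataFrame to ingest; its schema must match the target table.
val df = sparkSession.read.parquet("/path/to/source/data")

// Hypothetical writer configuration, mirroring the reader options above.
df.write.format("org.apache.cassandra.spark.sparksql.CassandraDataSink")
  .option("sidecar_contact_points", "localhost,localhost2,localhost3")
  .option("keyspace", "sbr_tests")
  .option("table", "basic_test")
  .option("local_dc", "datacenter1")          // assumed option name
  .option("bulk_writer_cl", "LOCAL_QUORUM")   // assumed option name
  .mode(SaveMode.Append)
  .save()
```

Running this job requires a reachable Cassandra cluster with Sidecar deployed on each node, as in the reader example.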

Developers interested in contributing to the Analytics library should see the DEV-README.

Getting Started

For example usage, see the example repository. The example covers setting up Cassandra 4.0 and Apache Sidecar, and running Spark Bulk Reader and Spark Bulk Writer jobs.