| commit | 69766bca399cc779e0f2f8e859e39f7e29a17b7a | |
|---|---|---|
| author | Francisco Guerrero <frankgh@apache.org> | Tue Jun 27 10:03:56 2023 -0700 |
| committer | Yifan Cai <ycai@apache.org> | Mon Jul 17 13:20:18 2023 -0700 |
| tree | fccbc12ef127f7ce4e2ee118fa2174aef120369e | |
| parent | 88faba42e5cb3f1384c92024a9c3608135d76218 | |
CASSANDRA-18662: Fix cassandra-analytics-core-example

This commit fixes the `SampleCassandraJob` available under the `cassandra-analytics-core-example` subproject.

- Fix checkstyle issues.
- Fix a serialization issue in `SidecarDataTransferApi`: the `sidecarClient` field in `SidecarDataTransferApi` is declared transient, which causes NPEs from executors when they attempt an SSTable upload. This commit avoids serializing the `dataTransferApi` field in `CassandraBulkWriterContext` entirely, and instead lazily initializes it during the `transfer()` method invocation. Initialization is guarded to a single thread by making the `transfer()` method synchronized. The `SidecarDataTransferApi` can be recreated when needed from the already serialized `clusterInfo`, `jobInfo`, and `conf` fields.
- Fix setting `ROW_BUFFER_MODE` to `BUFFERED`.

patch by Francisco Guerrero; reviewed by Dinesh Joshi, Yifan Cai for CASSANDRA-18662
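The fix described above follows a common pattern for Spark jobs: mark a non-serializable client as `@transient` so it is never shipped to executors, and rebuild it lazily (under synchronization) from fields that do serialize. A minimal sketch of that pattern, using hypothetical class and field names rather than the actual cassandra-analytics types:

```scala
// Illustrative stand-in for the non-serializable Sidecar client.
class DataTransferApi(val hosts: Seq[String]) {
  def uploadSSTable(path: String): String = s"uploaded $path via ${hosts.head}"
}

class BulkWriterContext(val hosts: Seq[String]) extends Serializable {
  // @transient: never serialized to executors; after deserialization
  // this field is null and must be rebuilt on first use.
  @transient private var dataTransferApi: DataTransferApi = _

  // `synchronized` guards the lazy initialization to a single thread,
  // mirroring the commit's synchronized transfer() method.
  def transfer(path: String): String = this.synchronized {
    if (dataTransferApi == null) {
      // Recreate the client from fields that did survive serialization.
      dataTransferApi = new DataTransferApi(hosts)
    }
    dataTransferApi.uploadSSTable(path)
  }
}
```

On an executor, the deserialized context arrives with `dataTransferApi` set to null; the first `transfer()` call rebuilds it, and subsequent calls reuse the same instance.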
This is the open-source repository for the Cassandra Spark Bulk Reader. The library integrates Cassandra with Spark, allowing users to run arbitrary Spark jobs against a Cassandra cluster securely and consistently.
This project contains the open-source implementations needed to connect to a Cassandra cluster and read its data into Spark.
For example usage, see the example repository; sample steps:
```scala
import org.apache.cassandra.spark.sparksql.CassandraDataSource
import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder.getOrCreate()
val df = sparkSession.read.format("org.apache.cassandra.spark.sparksql.CassandraDataSource")
                     .option("sidecar_instances", "localhost,localhost2,localhost3")
                     .option("keyspace", "sbr_tests")
                     .option("table", "basic_test")
                     .option("DC", "datacenter1")
                     .option("createSnapshot", true)
                     .option("numCores", 4)
                     .load()
```
The Cassandra Spark Bulk Writer enables high-speed data ingestion into Cassandra clusters running Cassandra 3.0 or 4.0.
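For orientation, a Bulk Writer job might look roughly like the following. This is a hedged sketch modeled on the reader snippet above: the format class (`org.apache.cassandra.spark.sparksql.CassandraDataSink`) and the option names are assumptions, not confirmed API, and the job requires a running Cassandra cluster with Sidecar; consult the example repository for the actual invocation.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.getOrCreate()

// Build a small DataFrame whose columns match the target table's schema.
val df = spark.range(100)
              .selectExpr("id", "cast(id as string) as course", "id as marks")

// NOTE: format class and option names below are illustrative assumptions
// modeled on the reader example; check the example repository for the
// exact Bulk Writer API.
df.write.format("org.apache.cassandra.spark.sparksql.CassandraDataSink")
        .option("sidecar_instances", "localhost,localhost2,localhost3")
        .option("keyspace", "spark_test")
        .option("table", "test")
        .option("local_dc", "datacenter1")
        .mode("append")
        .save()
```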
If you are a consumer of the Cassandra Spark Bulk Writer, please see our end-user documentation: usage instructions, FAQs, troubleshooting guides, and release notes.
If you are a developer interested in contributing to the Spark Bulk Writer (SBW), please see the DEV-README.
For example usage, see the example repository. The example covers setting up Cassandra 4.0 and Apache Sidecar, and running Spark Bulk Reader and Spark Bulk Writer jobs.