commit    82b3c0a79c9322142738a4ec2ff7d4d4c0be2370
author    Francisco Guerrero <frankgh@apache.org>  Tue Jul 25 12:41:10 2023 -0700
committer Yifan Cai <ycai@apache.org>  Tue Aug 08 14:41:26 2023 -0700
tree      52e92042829f1109d46d976b76af4ec93fdff126
parent    6f8f404535d4cff9272091f669f985ce11cee7d2
CASSANDRA-18692: Fix bulk writes with Buffered RowBufferMode

When `BUFFERED` RowBufferMode is set as part of the `WriterOption`s, `org.apache.cassandra.spark.bulkwriter.RecordWriter` ignores that configuration and instead uses the batch size to determine when to finalize an SSTable and start writing a new one if more rows are available. This commit fixes `org.apache.cassandra.spark.bulkwriter.RecordWriter#checkBatchSize` to take the configured `RowBufferMode` into account: with `UNBUFFERED`, the SSTable's batch size is checked during writes; with `BUFFERED`, the check has no effect.

Co-authored-by: Doug Rohrer <doug@therohrers.org>

Patch by Francisco Guerrero, Doug Rohrer; Reviewed by Dinesh Joshi, Yifan Cai for CASSANDRA-18692
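The corrected behavior can be sketched as follows. This is an illustrative sketch only; the enum, method name, and signatures are assumptions for clarity, not the actual `RecordWriter` code:

```java
// Illustrative sketch of the fixed batch-size check; names and shapes are
// assumptions, not the actual org.apache.cassandra.spark.bulkwriter.RecordWriter code.
public class BatchSizeCheckSketch
{
    enum RowBufferMode { BUFFERED, UNBUFFERED }

    /**
     * Returns true when the current SSTable should be finalized and a new one
     * started. With BUFFERED mode the batch size is ignored entirely; with
     * UNBUFFERED mode the SSTable is cut once batchSize rows have been written.
     */
    static boolean shouldFinalizeSSTable(RowBufferMode mode, long rowsWritten, long batchSize)
    {
        if (mode == RowBufferMode.BUFFERED)
        {
            return false; // buffered writes are not finalized by row count
        }
        return rowsWritten >= batchSize;
    }

    public static void main(String[] args)
    {
        System.out.println(shouldFinalizeSSTable(RowBufferMode.BUFFERED, 1_000_000L, 1000L)); // false
        System.out.println(shouldFinalizeSSTable(RowBufferMode.UNBUFFERED, 1000L, 1000L));    // true
        System.out.println(shouldFinalizeSSTable(RowBufferMode.UNBUFFERED, 999L, 1000L));     // false
    }
}
```

Before the fix, the batch-size path was taken unconditionally, which effectively ignored a `BUFFERED` configuration.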
The open-source repository for the Cassandra Spark Bulk Reader. This library integrates Cassandra with Spark, allowing users to run arbitrary Spark jobs against a Cassandra cluster securely and consistently.
This project contains the necessary open-source implementations to connect to a Cassandra cluster and read the data into Spark.
For example usage, see the example repository; sample steps:
```scala
import org.apache.cassandra.spark.sparksql.CassandraDataSource
import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder.getOrCreate()
val df = sparkSession.read.format("org.apache.cassandra.spark.sparksql.CassandraDataSource")
                     .option("sidecar_instances", "localhost,localhost2,localhost3")
                     .option("keyspace", "sbr_tests")
                     .option("table", "basic_test")
                     .option("DC", "datacenter1")
                     .option("createSnapshot", true)
                     .option("numCores", 4)
                     .load()
```
The Cassandra Spark Bulk Writer enables high-speed data ingestion into clusters running Cassandra 3.0 and 4.0.
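A writer job mirrors the reader example above. The sketch below uses Spark's Java `DataFrameWriter` API; the sink class name and option keys are assumptions modeled on the reader's conventions, so consult the end-user documentation for the exact supported options:

```java
// Hypothetical Bulk Writer invocation; the sink class name and option keys
// are assumptions based on the reader example, not a confirmed API.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class BulkWriteExample
{
    public static void main(String[] args)
    {
        SparkSession spark = SparkSession.builder().getOrCreate();
        Dataset<Row> df = spark.read().parquet("/path/to/input"); // any DataFrame to ingest

        df.write()
          .format("org.apache.cassandra.spark.sparksql.CassandraDataSink") // assumed sink class
          .option("sidecar_instances", "localhost,localhost2,localhost3")
          .option("keyspace", "sbr_tests")
          .option("table", "basic_test")
          .mode(SaveMode.Append)
          .save();
    }
}
```

Running this requires a Spark cluster with access to Cassandra and Sidecar, so it is a shape to adapt rather than a ready-to-run snippet.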
If you are a consumer of the Cassandra Spark Bulk Writer, please see our end-user documentation, which covers usage instructions, FAQs, troubleshooting guides, and release notes.
Developers interested in contributing to the SBW should see the DEV-README.
For example usage, see the example repository. The example covers setting up Cassandra 4.0 and Apache Sidecar, and running both a Spark Bulk Reader and a Spark Bulk Writer job.