commit | c00c454d698e5a29caf58e61ed52ab48d08fd7fe | |
---|---|---|
author | Francisco Guerrero <frankgh@apache.org> | Mon Apr 01 12:11:52 2024 -0700 |
committer | GitHub <noreply@github.com> | Mon Apr 01 12:11:52 2024 -0700 |
tree | dfb15caa1922ec91a9e7cbb3b4d8eec1eccede98 | |
parent | d28442ae712c1597052493aa3d2353a2de2495c2 | |
CASSANDRA-19507 Fix bulk reads of multiple tables that potentially have the same data file name (#47)

When reading multiple data frames from different tables using the bulk reader, it is possible for the same data file name to be retrieved from the same Sidecar instance. Because the `SSTable`s are cached in the `SSTableCache`, the `org.apache.cassandra.spark.reader.SSTableReader` can use the incorrect `SSTable` if it was cached with the same `#hashCode`. In this patch, equality takes into account the keyspace, table, and snapshot name. Additionally, we implement the `hashCode` and `equals` methods in `org.apache.cassandra.clients.SidecarInstanceImpl` so that the `SSTableCache` is utilized correctly. Once those methods are implemented, the issue originally described in the JIRA is surfaced.

Patch by Francisco Guerrero; Reviewed by Yifan Cai for CASSANDRA-19507
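The cache-collision described in the commit message can be sketched in a few lines. The case classes below are illustrative stand-ins for the library's actual types (the real classes are `SSTable`, `SSTableCache`, and `SidecarInstanceImpl`), chosen only to show why equality must include the keyspace, table, and snapshot name:

```scala
// Minimal sketch of the caching problem, assuming illustrative key types
// (these are NOT the library's actual classes).

// Before the fix: a key based only on the data file name collides when
// two different tables happen to produce the same file name.
final case class FileNameOnlyKey(dataFileName: String)

// After the fix: equality also covers keyspace, table, and snapshot name,
// so entries from different tables stay distinct in the cache.
final case class QualifiedKey(keyspace: String,
                              table: String,
                              snapshotName: String,
                              dataFileName: String)

object SSTableCacheDemo {
  def main(args: Array[String]): Unit = {
    val cache = scala.collection.mutable.Map.empty[QualifiedKey, String]
    val tableA = QualifiedKey("ks", "table_a", "snap", "nb-1-big-Data.db")
    val tableB = QualifiedKey("ks", "table_b", "snap", "nb-1-big-Data.db")
    cache(tableA) = "reader-for-table-a"
    cache(tableB) = "reader-for-table-b"

    // Same data file name, but the qualified keys do not collide.
    assert(tableA != tableB)
    assert(cache(tableA) == "reader-for-table-a")

    // A file-name-only key would have collided, returning the wrong entry.
    assert(FileNameOnlyKey("nb-1-big-Data.db") == FileNameOnlyKey("nb-1-big-Data.db"))
    println("ok")
  }
}
```

Scala case classes derive `equals` and `hashCode` from all constructor fields, which is the same contract the patch implements by hand in the Java `SidecarInstanceImpl`.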
The open-source repository for the Cassandra Spark Bulk Reader. This library integrates Cassandra with Spark, allowing users to run arbitrary Spark jobs against a Cassandra cluster securely and consistently.
This project contains the necessary open-source implementations to connect to a Cassandra cluster and read the data into Spark.
For example usage, see the example repository; sample steps:
```scala
import org.apache.cassandra.spark.sparksql.CassandraDataSource
import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder.getOrCreate()
val df = sparkSession.read
                     .format("org.apache.cassandra.spark.sparksql.CassandraDataSource")
                     .option("sidecar_instances", "localhost,localhost2,localhost3")
                     .option("keyspace", "sbr_tests")
                     .option("table", "basic_test")
                     .option("DC", "datacenter1")
                     .option("createSnapshot", true)
                     .option("numCores", 4)
                     .load()
```
The Cassandra Spark Bulk Writer allows high-speed data ingestion into clusters running Cassandra 3.0 and 4.0.
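For orientation, write-side usage mirrors the reader API shown above. The sketch below is an assumption based on that pattern: the `CassandraDataSink` format class and the option names are not confirmed by this page, so check the example repository for the authoritative invocation. It also requires a running Spark session and a reachable Cassandra cluster with Sidecar, so it is not runnable standalone:

```scala
// Hypothetical sketch, mirroring the reader example above.
// The format class and option names are assumptions, not confirmed here.
df.write
  .format("org.apache.cassandra.spark.sparksql.CassandraDataSink")
  .option("sidecar_instances", "localhost,localhost2,localhost3")
  .option("keyspace", "sbr_tests")
  .option("table", "basic_test")
  .mode("append")
  .save()
```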
Developers interested in contributing to the Analytics library should see the DEV-README.
For example usage, see the example repository. The example covers setting up Cassandra 4.0 and Apache Sidecar, and running Spark Bulk Reader and Spark Bulk Writer jobs.