commit 46c35d0ef2efb66512133a7913df9936b0a80dc8
author: Francisco Guerrero <frankgh@apache.org> | Mon Feb 19 20:50:16 2024 -0800
committer: Francisco Guerrero <frankgh@apache.org> | Wed Feb 21 16:17:22 2024 -0800
tree: b55a84e57174e811d003f3ae7223ec73b6ea7d41
parent: dc0e79b9c483562ec0920d69e886715eb329c426
CASSANDRA-19411: Bulk reader fails to produce a row when regular column values are null

The Bulk Reader won't emit a row when the regular column values are all `null`. For example, given a schema with partition key `a`, `b`; clustering key `c`, `d`; and regular columns `e`, `f`:

| a | b | c | d | e | f |
| --- | --- | --- | --- | ---- | ---- |
| pk1 | pk2 | ck1 | ck2 | null | null |

When queried from the Analytics bulk reader, this row is not produced. The issue also occurs when the projected regular column values are all `null`, even though other, non-projected columns may have values.

Patch by Francisco Guerrero; Reviewed by Yifan Cai for CASSANDRA-19411
The open-source repository for the Cassandra Spark Bulk Reader. This library integrates Cassandra with Spark, allowing users to run arbitrary Spark jobs against a Cassandra cluster securely and consistently.
This project contains the necessary open-source implementations to connect to a Cassandra cluster and read the data into Spark.
For example usage, see the example repository; sample steps:
```scala
import org.apache.cassandra.spark.sparksql.CassandraDataSource
import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder.getOrCreate()
val df = sparkSession.read.format("org.apache.cassandra.spark.sparksql.CassandraDataSource")
                          .option("sidecar_instances", "localhost,localhost2,localhost3")
                          .option("keyspace", "sbr_tests")
                          .option("table", "basic_test")
                          .option("DC", "datacenter1")
                          .option("createSnapshot", true)
                          .option("numCores", 4)
                          .load()
```
The Cassandra Spark Bulk Writer allows high-speed data ingest into clusters running Cassandra 3.0 and 4.0.
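The write path mirrors the reader's DataSource API shown above. A minimal sketch of a Bulk Writer job, assuming the writer is exposed through a `CassandraDataSink` format class and that its option names parallel the reader's (both are assumptions here; the example repository has the authoritative names):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder.getOrCreate()

// Hypothetical source data; replace with your own DataFrame whose columns
// match the target table's schema.
val df = spark.read.parquet("/path/to/source")

// Format class and option names are assumptions based on the reader example;
// consult the example repository for the exact values.
df.write.format("org.apache.cassandra.spark.sparksql.CassandraDataSink")
        .option("sidecar_instances", "localhost,localhost2,localhost3")
        .option("keyspace", "sbr_tests")
        .option("table", "basic_test")
        .mode(SaveMode.Append)
        .save()
```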
Developers interested in contributing to the Analytics library should see the DEV-README.
For example usage, see the example repository. The example covers setting up Cassandra 4.0 and Apache Sidecar, and running both a Spark Bulk Reader and a Spark Bulk Writer job.