
Spark SQL Streaming Amazon SQS Data Source

A library for reading data from Amazon S3, with optimized file listing via Amazon SQS, using Spark SQL Streaming (also known as Structured Streaming).

Linking

Using SBT:

libraryDependencies += "org.apache.bahir" %% "spark-sql-streaming-sqs" % "{{site.SPARK_VERSION}}"

Using Maven:

<dependency>
    <groupId>org.apache.bahir</groupId>
    <artifactId>spark-sql-streaming-sqs_{{site.SCALA_BINARY_VERSION}}</artifactId>
    <version>{{site.SPARK_VERSION}}</version>
</dependency>

This library can also be added to Spark jobs launched through spark-shell or spark-submit by using the --packages command line option. For example, to include it when starting the spark shell:

$ bin/spark-shell --packages org.apache.bahir:spark-sql-streaming-sqs_{{site.SCALA_BINARY_VERSION}}:{{site.SPARK_VERSION}}

Unlike using --jars, using --packages ensures that this library and its dependencies will be added to the classpath. The --packages argument can also be used with bin/spark-submit.
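
For example (the application class and jar names below are placeholders):

$ bin/spark-submit --packages org.apache.bahir:spark-sql-streaming-sqs_{{site.SCALA_BINARY_VERSION}}:{{site.SPARK_VERSION}} --class com.example.SqsStreamingApp sqs-streaming-app.jar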

This library is compiled for Scala 2.12 only, and is intended to support Spark 2.4.0 onwards.

Configuration options

The source is configured through the following options, passed as key-value parameters on the stream reader.

| Name | Default | Meaning |
| --- | --- | --- |
| sqsUrl | required, no default value | SQS queue URL, e.g. 'https://sqs.us-east-1.amazonaws.com/330183209093/TestQueue' |
| region | required, no default value | AWS region where the queue is created |
| fileFormat | required, no default value | file format of the files stored on Amazon S3 |
| schema | required, no default value | schema of the data being read (see the sketch below) |
| sqsFetchIntervalSeconds | 10 | time interval (in seconds) after which to fetch messages from the Amazon SQS queue |
| sqsLongPollingWaitTimeSeconds | 20 | wait time (in seconds) for long polling on the Amazon SQS queue |
| sqsMaxConnections | 1 | number of parallel threads to connect to the Amazon SQS queue |
| sqsMaxRetries | 10 | maximum number of consecutive retries in case of a connection failure to SQS before giving up |
| ignoreFileDeletion | false | whether to ignore any file-deleted message in the SQS queue |
| fileNameOnly | false | whether to check for new files based on only the filename instead of the full path |
| shouldSortFiles | true | whether to sort files based on timestamp while listing them from SQS |
| useInstanceProfileCredentials | false | whether to use EC2 instance profile credentials for connecting to Amazon SQS |
| maxFilesPerTrigger | no default value | maximum number of files to process in a micro-batch |
| maxFileAge | 7d | maximum age of a file that can be found in this directory |
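
The schema option above is the Spark SQL schema applied to the files read from S3. A minimal sketch of defining such a schema for JSON records, where the field names are purely illustrative:

    import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

    // Illustrative schema for JSON records stored on S3
    val schema = new StructType()
      .add("id", StringType)
      .add("eventTime", TimestampType)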

Example

An example of creating a SQL stream which uses Amazon SQS to list files on S3:

    val inputDf = sparkSession
                      .readStream
                      .format("s3-sqs")
                      .schema(schema)
                      .option("sqsUrl", queueUrl)
                      .option("region", "us-east-1")
                      .option("fileFormat", "json")
                      .option("sqsFetchIntervalSeconds", "2")
                      .option("sqsLongPollingWaitTimeSeconds", "5")
                      .option("useInstanceProfileCredentials", "true")
                      .load()
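
The resulting inputDf is a standard streaming DataFrame and can be written to any Structured Streaming sink. A minimal sketch writing it to the console sink, where the checkpoint location and trigger interval are illustrative:

    import org.apache.spark.sql.streaming.Trigger

    // Write the stream to the console sink for inspection; the checkpoint
    // path and trigger interval below are placeholders.
    val query = inputDf
                    .writeStream
                    .format("console")
                    .option("checkpointLocation", "/tmp/sqs-source-checkpoint")
                    .trigger(Trigger.ProcessingTime("10 seconds"))
                    .start()

    query.awaitTermination()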