{% include JB/setup %}

A library for reading data from Google Cloud Pub/Sub using Spark Streaming.
Using SBT:

```
libraryDependencies += "org.apache.bahir" %% "spark-streaming-pubsub" % "{{site.SPARK_VERSION}}"
```
Using Maven:

```xml
<dependency>
    <groupId>org.apache.bahir</groupId>
    <artifactId>spark-streaming-pubsub_{{site.SCALA_BINARY_VERSION}}</artifactId>
    <version>{{site.SPARK_VERSION}}</version>
</dependency>
```
This library can also be added to Spark jobs launched through `spark-shell` or `spark-submit` by using the `--packages` command line option. For example, to include it when starting the spark shell:

```
$ bin/spark-shell --packages org.apache.bahir:spark-streaming-pubsub_{{site.SCALA_BINARY_VERSION}}:{{site.SPARK_VERSION}}
```
Unlike using `--jars`, using `--packages` ensures that this library and its dependencies will be added to the classpath. The `--packages` argument can also be used with `bin/spark-submit`.
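For example, a `spark-submit` invocation could look like the following, where `your-app.jar` is a placeholder for your application jar:

```
$ bin/spark-submit --packages org.apache.bahir:spark-streaming-pubsub_{{site.SCALA_BINARY_VERSION}}:{{site.SPARK_VERSION}} your-app.jar
```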
First you need to create a credential with `SparkGCPCredentials`; it supports four types of credentials:

* Application default credentials:
  `SparkGCPCredentials.builder.build()`
* JSON-type service account:
  `SparkGCPCredentials.builder.jsonServiceAccount(PATH_TO_JSON_KEY).build()`
* P12-type service account:
  `SparkGCPCredentials.builder.p12ServiceAccount(PATH_TO_P12_KEY, EMAIL_ACCOUNT).build()`
* Metadata service account (e.g. when running on Dataproc):
  `SparkGCPCredentials.builder.metadataServiceAccount().build()`
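For instance, here is a minimal sketch that selects between a JSON service-account key and application default credentials; reading the key path from the `GCP_JSON_KEY_PATH` environment variable is a hypothetical convention, not part of the library:

```scala
import org.apache.spark.streaming.pubsub.SparkGCPCredentials

// Hypothetical convention: take the key path from an environment variable if present.
val keyPath: Option[String] = sys.env.get("GCP_JSON_KEY_PATH")

val credentials = keyPath match {
  case Some(path) => SparkGCPCredentials.builder.jsonServiceAccount(path).build()
  case None       => SparkGCPCredentials.builder.build() // application default
}
```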
Scala API:

```scala
val lines = PubsubUtils.createStream(ssc, projectId, subscriptionName, credential, ..)
```
Java API:

```java
JavaDStream<SparkPubsubMessage> lines = PubsubUtils.createStream(jssc, projectId, subscriptionName, credential, ...);
```
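Putting these pieces together, a minimal end-to-end Scala sketch might look like the following. The project ID, subscription name, and batch interval are placeholders, and the trailing argument elided above is assumed to be a storage level:

```scala
import java.nio.charset.StandardCharsets

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.pubsub.{PubsubUtils, SparkGCPCredentials}

object PubsubStreamSketch {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("PubsubStreamSketch")
    // 10-second batches; tune the interval for your workload.
    val ssc = new StreamingContext(sparkConf, Seconds(10))

    // Application default credentials; swap in one of the builders above as needed.
    val credentials = SparkGCPCredentials.builder.build()

    // "my-gcp-project" and "my-subscription" are placeholder names.
    val messages = PubsubUtils.createStream(
      ssc, "my-gcp-project", "my-subscription",
      credentials, StorageLevel.MEMORY_AND_DISK_SER_2)

    // Pub/Sub payloads are raw bytes; decode them as UTF-8 text before printing.
    messages
      .map(msg => new String(msg.getData(), StandardCharsets.UTF_8))
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```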
See end-to-end examples at Google Cloud Pubsub Examples.