---
layout: global
title: Examples
type: "page singular"
navigation:
  weight: 4
  show: true
---

These examples give a quick overview of the Spark API. Spark is built on the concept of distributed datasets, which contain arbitrary Java or Python objects. You create a dataset from external data, then apply parallel operations to it. The building block of the Spark API is its RDD API. In the RDD API, there are two types of operations: transformations, which define a new dataset based on previous ones, and actions, which kick off a job to execute on a cluster. On top of Spark's RDD API, high-level APIs are provided, such as the DataFrame API and the Machine Learning API. These high-level APIs provide a concise way to conduct certain data operations. On this page, we show examples using the RDD API as well as examples using the high-level APIs.
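To make the distinction concrete, here is a minimal sketch of a transformation followed by an action, assuming an existing `SparkContext` named `sc` (as in the examples below):

{% highlight python %}
# Distribute a local collection as an RDD.
data = sc.parallelize(range(10))

# Transformation: lazily defines a new RDD; no job runs yet.
evens = data.filter(lambda x: x % 2 == 0)

# Action: kicks off a job on the cluster and returns a result (5).
print(evens.count())
{% endhighlight %}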

The examples below estimate π by sampling random points in the unit square: the fraction that falls inside the unit quarter-circle approaches π/4. First in Python:

{% highlight python %}
import random

def inside(p):
    x, y = random.random(), random.random()
    return x*x + y*y < 1

count = sc.parallelize(range(0, NUM_SAMPLES)) \
          .filter(inside).count()
print("Pi is roughly %f" % (4.0 * count / NUM_SAMPLES))
{% endhighlight %}

The same computation in Java:

{% highlight java %}
List<Integer> l = new ArrayList<>(NUM_SAMPLES);
for (int i = 0; i < NUM_SAMPLES; i++) {
  l.add(i);
}

long count = sc.parallelize(l).filter(i -> {
  double x = Math.random();
  double y = Math.random();
  return x*x + y*y < 1;
}).count();
System.out.println("Pi is roughly " + 4.0 * count / NUM_SAMPLES);
{% endhighlight %}

The next example uses the DataFrame API to search for error messages in a log file, first in Python:

{% highlight python %}
from pyspark.sql import Row
from pyspark.sql.functions import col

textFile = sc.textFile("hdfs://...")

# Creates a DataFrame having a single column named "line"
df = textFile.map(lambda r: Row(r)).toDF(["line"])
errors = df.filter(col("line").like("%ERROR%"))
# Counts all the errors
errors.count()
# Counts errors mentioning MySQL
errors.filter(col("line").like("%MySQL%")).count()
# Fetches the MySQL errors as an array of strings
errors.filter(col("line").like("%MySQL%")).collect()
{% endhighlight %}

In Scala:

{% highlight scala %}
val textFile = sc.textFile("hdfs://...")

// Creates a DataFrame having a single column named "line"
val df = textFile.toDF("line")
val errors = df.filter(col("line").like("%ERROR%"))
// Counts all the errors
errors.count()
// Counts errors mentioning MySQL
errors.filter(col("line").like("%MySQL%")).count()
// Fetches the MySQL errors as an array of strings
errors.filter(col("line").like("%MySQL%")).collect()
{% endhighlight %}

And in Java, where `df` is the single-column DataFrame of log lines built as in the Python and Scala versions above:

{% highlight java %}
DataFrame errors = df.filter(col("line").like("%ERROR%"));
// Counts all the errors
errors.count();
// Counts errors mentioning MySQL
errors.filter(col("line").like("%MySQL%")).count();
// Fetches the MySQL errors as an array of strings
errors.filter(col("line").like("%MySQL%")).collect();
{% endhighlight %}
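For a self-contained variant of the same search, here is a minimal sketch using the `SparkSession` entry point; the session, the app name, and the input path are assumptions for illustration:

{% highlight python %}
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Assumed setup: a local session and a hypothetical log file path.
spark = SparkSession.builder.appName("text-search-sketch").getOrCreate()
df = spark.read.text("logs.txt")  # DataFrame with a single "value" column

errors = df.filter(col("value").like("%ERROR%"))
print(errors.count())                                       # all error lines
print(errors.filter(col("value").like("%MySQL%")).count())  # errors mentioning MySQL
{% endhighlight %}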

The DataFrame API can also be used to run simple aggregations. In this example, `df` is an existing DataFrame of people records (for instance, read from a database table) with an `age` column:

{% highlight python %}
# Prints the schema of this DataFrame.
df.printSchema()

# Counts people by age
countsByAge = df.groupBy("age").count()
countsByAge.show()

# Saves countsByAge to S3 in the JSON format.
countsByAge.write.format("json").save("s3a://...")
{% endhighlight %}

In Scala:

{% highlight scala %}
// Prints the schema of this DataFrame.
df.printSchema()

// Counts people by age
val countsByAge = df.groupBy("age").count()
countsByAge.show()

// Saves countsByAge to S3 in the JSON format.
countsByAge.write.format("json").save("s3a://...")
{% endhighlight %}

And in Java:

{% highlight java %}
// Prints the schema of this DataFrame.
df.printSchema();

// Counts people by age
DataFrame countsByAge = df.groupBy("age").count();
countsByAge.show();

// Saves countsByAge to S3 in the JSON format.
countsByAge.write().format("json").save("s3a://...");
{% endhighlight %}
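To try the aggregation end to end without an external table, here is a minimal sketch that builds a toy people DataFrame in memory; the session, the sample rows, and the local output path are assumptions for illustration:

{% highlight python %}
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("age-counts-sketch").getOrCreate()

# Toy data standing in for a real people table.
df = spark.createDataFrame(
    [("Alice", 34), ("Bob", 36), ("Carol", 34)],
    ["name", "age"])

countsByAge = df.groupBy("age").count()
countsByAge.show()

# Writes locally instead of to S3 for this sketch.
countsByAge.write.format("json").save("counts_by_age")
{% endhighlight %}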

Spark also provides a machine learning API. The following example fits a logistic regression model to a DataFrame of labels and feature vectors, then predicts labels for the same data. First in Python, where `df` is an existing DataFrame with `label` and `features` columns:

{% highlight python %}
from pyspark.ml.classification import LogisticRegression

# Set parameters for the algorithm.
# Here, we limit the number of iterations to 10.
lr = LogisticRegression(maxIter=10)

# Fit the model to the data.
model = lr.fit(df)

# Given a dataset, predict each point's label, and show the results.
model.transform(df).show()
{% endhighlight %}

In Scala:

{% highlight scala %}
// Set parameters for the algorithm.
// Here, we limit the number of iterations to 10.
val lr = new LogisticRegression().setMaxIter(10)

// Fit the model to the data.
val model = lr.fit(df)

// Inspect the model: get the feature weights.
val weights = model.weights

// Given a dataset, predict each point's label, and show the results.
model.transform(df).show()
{% endhighlight %}

And in Java:

{% highlight java %}
// Set parameters for the algorithm.
// Here, we limit the number of iterations to 10.
LogisticRegression lr = new LogisticRegression().setMaxIter(10);

// Fit the model to the data.
LogisticRegressionModel model = lr.fit(df);

// Inspect the model: get the feature weights.
Vector weights = model.weights();

// Given a dataset, predict each point's label, and show the results.
model.transform(df).show();
{% endhighlight %}
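The logistic regression examples assume that the training DataFrame `df` already exists. Below is a minimal sketch of how such a DataFrame of labels and feature vectors could be constructed; the session and the sample points are made up for illustration:

{% highlight python %}
from pyspark.ml.linalg import Vectors
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lr-input-sketch").getOrCreate()

# A tiny, made-up training set: each row is (label, feature vector).
df = spark.createDataFrame(
    [(1.0, Vectors.dense(0.0, 1.1, 0.1)),
     (0.0, Vectors.dense(2.0, 1.0, -1.0)),
     (0.0, Vectors.dense(2.0, 1.3, 1.0)),
     (1.0, Vectors.dense(0.0, 1.2, -0.5))],
    ["label", "features"])
{% endhighlight %}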

Many additional examples are distributed with Spark: