Spark SQL is a Spark module for structured data processing. It provides a programming abstraction called DataFrames and can also act as a distributed SQL query engine.
Spark SQL can also be used to read data from an existing Hive installation. For more on how to configure this feature, please refer to the Hive Tables section.
A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. DataFrames can be constructed from a wide array of sources such as: structured data files, tables in Hive, external databases, or existing RDDs.
The DataFrame API is available in Scala, Java, Python, and R.
All of the examples on this page use sample data included in the Spark distribution and can be run in the spark-shell, pyspark shell, or sparkR shell.
The entry point into all functionality in Spark SQL is the SQLContext
class, or one of its descendants. To create a basic SQLContext
, all you need is a SparkContext.
{% highlight scala %}
val sc: SparkContext // An existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// this is used to implicitly convert an RDD to a DataFrame.
import sqlContext.implicits._
{% endhighlight %}
The entry point into all functionality in Spark SQL is the SQLContext
class, or one of its descendants. To create a basic SQLContext
, all you need is a SparkContext.
{% highlight java %}
JavaSparkContext sc = ...; // An existing JavaSparkContext.
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);
{% endhighlight %}
The entry point into all relational functionality in Spark is the SQLContext
class, or one of its descendants. To create a basic SQLContext
, all you need is a SparkContext.
{% highlight python %}
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
{% endhighlight %}
The entry point into all relational functionality in Spark is the SQLContext
class, or one of its descendants. To create a basic SQLContext
, all you need is a SparkContext.
{% highlight r %}
sqlContext <- sparkRSQL.init(sc)
{% endhighlight %}
In addition to the basic SQLContext
, you can also create a HiveContext
, which provides a superset of the functionality provided by the basic SQLContext
. Additional features include the ability to write queries using the more complete HiveQL parser, access to Hive UDFs, and the ability to read data from Hive tables. To use a HiveContext
, you do not need to have an existing Hive setup, and all of the data sources available to a SQLContext
are still available. HiveContext
is only packaged separately to avoid including all of Hive's dependencies in the default Spark build. If these dependencies are not a problem for your application then using HiveContext
is recommended for the 1.3 release of Spark. Future releases will focus on bringing SQLContext
up to feature parity with a HiveContext
.
The specific variant of SQL that is used to parse queries can also be selected using the spark.sql.dialect
option. This parameter can be changed using either the setConf
method on a SQLContext
or by using a SET key=value
command in SQL. For a SQLContext
, the only dialect available is “sql” which uses a simple SQL parser provided by Spark SQL. In a HiveContext
, the default is “hiveql”, though “sql” is also available. Since the HiveQL parser is much more complete, this is recommended for most use cases.
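For example, a minimal sketch of switching dialects on an existing sqlContext (assuming it is a HiveContext, since only a HiveContext supports both dialects):

{% highlight scala %}
// Both forms select the simple SQL parser; use "hiveql" to switch back.
sqlContext.setConf("spark.sql.dialect", "sql")
sqlContext.sql("SET spark.sql.dialect=sql")
{% endhighlight %}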
With a SQLContext
, applications can create DataFrame
s from an existing RDD
, from a Hive table, or from data sources.
As an example, the following creates a DataFrame
based on the content of a JSON file:
{% highlight scala %}
val df = sqlContext.read.json("examples/src/main/resources/people.json")

// Displays the content of the DataFrame to stdout
df.show()
{% endhighlight %}
{% highlight java %}
DataFrame df = sqlContext.read().json("examples/src/main/resources/people.json");

// Displays the content of the DataFrame to stdout
df.show();
{% endhighlight %}
{% highlight python %}
df = sqlContext.read.json("examples/src/main/resources/people.json")
df.show()
{% endhighlight %}
{% highlight r %}
df <- jsonFile(sqlContext, "examples/src/main/resources/people.json")
showDF(df)
{% endhighlight %}
DataFrames provide a domain-specific language for structured data manipulation in Scala, Java, and Python.
Here we include some basic examples of structured data processing using DataFrames:
{% highlight scala %}
// Create the DataFrame
val df = sqlContext.read.json("examples/src/main/resources/people.json")

// Show the content of the DataFrame
df.show()
// age  name
// null Michael
// 30   Andy
// 19   Justin

// Print the schema in a tree format
df.printSchema()
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Select only the "name" column
df.select("name").show()
// name
// Michael
// Andy
// Justin

// Select everybody, but increment the age by 1
df.select(df("name"), df("age") + 1).show()
// name    (age + 1)
// Michael null
// Andy    31
// Justin  20

// Select people older than 21
df.filter(df("age") > 21).show()
// age name
// 30  Andy

// Count people by age
df.groupBy("age").count().show()
// age  count
// null 1
// 19   1
// 30   1
{% endhighlight %}
For a complete list of the types of operations that can be performed on a DataFrame refer to the API Documentation.
In addition to simple column references and expressions, DataFrames also have a rich library of functions including string manipulation, date arithmetic, common math operations and more. The complete list is available in the DataFrame Function Reference.
{% highlight java %}
// Create the DataFrame
DataFrame df = sqlContext.read().json("examples/src/main/resources/people.json");

// Show the content of the DataFrame
df.show();
// age  name
// null Michael
// 30   Andy
// 19   Justin

// Print the schema in a tree format
df.printSchema();
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)

// Select only the "name" column
df.select("name").show();
// name
// Michael
// Andy
// Justin

// Select everybody, but increment the age by 1
df.select(df.col("name"), df.col("age").plus(1)).show();
// name    (age + 1)
// Michael null
// Andy    31
// Justin  20

// Select people older than 21
df.filter(df.col("age").gt(21)).show();
// age name
// 30  Andy

// Count people by age
df.groupBy("age").count().show();
// age  count
// null 1
// 19   1
// 30   1
{% endhighlight %}
For a complete list of the types of operations that can be performed on a DataFrame refer to the API Documentation.
In addition to simple column references and expressions, DataFrames also have a rich library of functions including string manipulation, date arithmetic, common math operations and more. The complete list is available in the DataFrame Function Reference.
{% highlight python %}
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)

# Create the DataFrame
df = sqlContext.read.json("examples/src/main/resources/people.json")

# Show the content of the DataFrame
df.show()

# Print the schema in a tree format
df.printSchema()

# Select only the "name" column
df.select("name").show()

# Select everybody, but increment the age by 1
df.select(df['name'], df['age'] + 1).show()

# Select people older than 21
df.filter(df['age'] > 21).show()

# Count people by age
df.groupBy("age").count().show()
{% endhighlight %}
For a complete list of the types of operations that can be performed on a DataFrame refer to the API Documentation.
In addition to simple column references and expressions, DataFrames also have a rich library of functions including string manipulation, date arithmetic, common math operations and more. The complete list is available in the DataFrame Function Reference.
{% highlight r %}
# Create the DataFrame
df <- jsonFile(sqlContext, "examples/src/main/resources/people.json")

# Show the content of the DataFrame
showDF(df)

# Print the schema in a tree format
printSchema(df)

# Select only the "name" column
showDF(select(df, "name"))

# Select everybody, but increment the age by 1
showDF(select(df, df$name, df$age + 1))

# Select people older than 21
showDF(where(df, df$age > 21))

# Count people by age
showDF(count(groupBy(df, "age")))
{% endhighlight %}
For a complete list of the types of operations that can be performed on a DataFrame refer to the API Documentation.
In addition to simple column references and expressions, DataFrames also have a rich library of functions including string manipulation, date arithmetic, common math operations and more. The complete list is available in the DataFrame Function Reference.
The sql
function on a SQLContext
enables applications to run SQL queries programmatically and returns the result as a DataFrame
.
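For example (the table name here stands in for any table registered with the context):

{% highlight scala %}
val sqlContext = ... // An existing SQLContext
val df = sqlContext.sql("SELECT * FROM table")
{% endhighlight %}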
Spark SQL supports two different methods for converting existing RDDs into DataFrames. The first method uses reflection to infer the schema of an RDD that contains specific types of objects. This reflection based approach leads to more concise code and works well when you already know the schema while writing your Spark application.
The second method for creating DataFrames is through a programmatic interface that allows you to construct a schema and then apply it to an existing RDD. While this method is more verbose, it allows you to construct DataFrames when the columns and their types are not known until runtime.
The Scala interface for Spark SQL supports automatically converting an RDD containing case classes to a DataFrame. The case class defines the schema of the table. The names of the arguments to the case class are read using reflection and become the names of the columns. Case classes can also be nested or contain complex types such as Sequences or Arrays. This RDD can be implicitly converted to a DataFrame and then be registered as a table. Tables can be used in subsequent SQL statements.
{% highlight scala %}
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
// this is used to implicitly convert an RDD to a DataFrame.
import sqlContext.implicits._

// Define the schema using a case class.
// Note: Case classes in Scala 2.10 can support only up to 22 fields. To work around this limit,
// you can use custom classes that implement the Product interface.
case class Person(name: String, age: Int)

// Create an RDD of Person objects and register it as a table.
val people = sc.textFile("examples/src/main/resources/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt)).toDF()
people.registerTempTable("people")

// SQL statements can be run by using the sql methods provided by sqlContext.
val teenagers = sqlContext.sql("SELECT name, age FROM people WHERE age >= 13 AND age <= 19")

// The results of SQL queries are DataFrames and support all the normal RDD operations.
// The columns of a row in the result can be accessed by field index:
teenagers.map(t => "Name: " + t(0)).collect().foreach(println)

// or by field name:
teenagers.map(t => "Name: " + t.getAs[String]("name")).collect().foreach(println)

// row.getValuesMap[T] retrieves multiple columns at once into a Map[String, T]
teenagers.map(_.getValuesMap[Any](List("name", "age"))).collect().foreach(println)
// Map("name" -> "Justin", "age" -> 19)
{% endhighlight %}
Spark SQL supports automatically converting an RDD of JavaBeans into a DataFrame. The BeanInfo, obtained using reflection, defines the schema of the table. Currently, Spark SQL does not support JavaBeans that are nested or contain complex types such as Lists or Arrays. You can create a JavaBean by creating a class that implements Serializable and has getters and setters for all of its fields.
{% highlight java %}
public static class Person implements Serializable {
  private String name;
  private int age;

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }

  public int getAge() {
    return age;
  }

  public void setAge(int age) {
    this.age = age;
  }
}
{% endhighlight %}
A schema can be applied to an existing RDD by calling createDataFrame
and providing the Class object for the JavaBean.
{% highlight java %}
// sc is an existing JavaSparkContext.
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

// Load a text file and convert each line to a JavaBean.
JavaRDD<Person> people = sc.textFile("examples/src/main/resources/people.txt").map(
  new Function<String, Person>() {
    public Person call(String line) throws Exception {
      String[] parts = line.split(",");

      Person person = new Person();
      person.setName(parts[0]);
      person.setAge(Integer.parseInt(parts[1].trim()));
      return person;
    }
  });

// Apply a schema to an RDD of JavaBeans and register it as a table.
DataFrame schemaPeople = sqlContext.createDataFrame(people, Person.class);
schemaPeople.registerTempTable("people");

// SQL can be run over RDDs that have been registered as tables.
DataFrame teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19");

// The results of SQL queries are DataFrames and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
List<String> teenagerNames = teenagers.javaRDD().map(new Function<Row, String>() {
  public String call(Row row) {
    return "Name: " + row.getString(0);
  }
}).collect();
{% endhighlight %}
Spark SQL can convert an RDD of Row objects to a DataFrame, inferring the datatypes. Rows are constructed by passing a list of key/value pairs as kwargs to the Row class. The keys of this list define the column names of the table, and the types are inferred by looking at the first row. Since we currently only look at the first row, it is important that there is no missing data in the first row of the RDD. In future versions we plan to more completely infer the schema by looking at more data, similar to the inference that is performed on JSON files.
{% highlight python %}
# sc is an existing SparkContext.
from pyspark.sql import SQLContext, Row
sqlContext = SQLContext(sc)

# Load a text file and convert each line to a Row.
lines = sc.textFile("examples/src/main/resources/people.txt")
parts = lines.map(lambda l: l.split(","))
people = parts.map(lambda p: Row(name=p[0], age=int(p[1])))

# Infer the schema, and register the DataFrame as a table.
schemaPeople = sqlContext.createDataFrame(people)
schemaPeople.registerTempTable("people")

# SQL can be run over DataFrames that have been registered as a table.
teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")

# The results of SQL queries are RDDs and support all the normal RDD operations.
teenNames = teenagers.map(lambda p: "Name: " + p.name)
for teenName in teenNames.collect():
  print(teenName)
{% endhighlight %}
When case classes cannot be defined ahead of time (for example, the structure of records is encoded in a string, or a text dataset will be parsed and fields will be projected differently for different users), a DataFrame
can be created programmatically with three steps.
1. Create an RDD of Rows from the original RDD;
2. Create the schema represented by a StructType matching the structure of Rows in the RDD created in Step 1.
3. Apply the schema to the RDD of Rows via the createDataFrame method provided by SQLContext.

For example:

{% highlight scala %}
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
// Create an RDD
val people = sc.textFile("examples/src/main/resources/people.txt")

// The schema is encoded in a string
val schemaString = "name age"

// Import Row.
import org.apache.spark.sql.Row

// Import Spark SQL data types
import org.apache.spark.sql.types.{StructType, StructField, StringType}

// Generate the schema based on the string of schema
val schema = StructType(
  schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, true)))

// Convert records of the RDD (people) to Rows.
val rowRDD = people.map(_.split(",")).map(p => Row(p(0), p(1).trim))

// Apply the schema to the RDD.
val peopleDataFrame = sqlContext.createDataFrame(rowRDD, schema)

// Register the DataFrame as a table.
peopleDataFrame.registerTempTable("people")

// SQL statements can be run by using the sql methods provided by sqlContext.
val results = sqlContext.sql("SELECT name FROM people")

// The results of SQL queries are DataFrames and support all the normal RDD operations.
// The columns of a row in the result can be accessed by field index or by field name.
results.map(t => "Name: " + t(0)).collect().foreach(println)
{% endhighlight %}
When JavaBean classes cannot be defined ahead of time (for example, the structure of records is encoded in a string, or a text dataset will be parsed and fields will be projected differently for different users), a DataFrame
can be created programmatically with three steps.
1. Create an RDD of Rows from the original RDD;
2. Create the schema represented by a StructType matching the structure of Rows in the RDD created in Step 1.
3. Apply the schema to the RDD of Rows via the createDataFrame method provided by SQLContext.

For example:

{% highlight java %}
import org.apache.spark.api.java.function.Function;
// Import factory methods provided by DataTypes.
import org.apache.spark.sql.types.DataTypes;
// Import StructType and StructField
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.types.StructField;
// Import Row.
import org.apache.spark.sql.Row;
// Import RowFactory.
import org.apache.spark.sql.RowFactory;
// sc is an existing JavaSparkContext.
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

// Load a text file.
JavaRDD<String> people = sc.textFile("examples/src/main/resources/people.txt");

// The schema is encoded in a string
String schemaString = "name age";

// Generate the schema based on the string of schema
List<StructField> fields = new ArrayList<StructField>();
for (String fieldName: schemaString.split(" ")) {
  fields.add(DataTypes.createStructField(fieldName, DataTypes.StringType, true));
}
StructType schema = DataTypes.createStructType(fields);

// Convert records of the RDD (people) to Rows.
JavaRDD<Row> rowRDD = people.map(
  new Function<String, Row>() {
    public Row call(String record) throws Exception {
      String[] fields = record.split(",");
      return RowFactory.create(fields[0], fields[1].trim());
    }
  });

// Apply the schema to the RDD.
DataFrame peopleDataFrame = sqlContext.createDataFrame(rowRDD, schema);

// Register the DataFrame as a table.
peopleDataFrame.registerTempTable("people");

// SQL can be run over RDDs that have been registered as tables.
DataFrame results = sqlContext.sql("SELECT name FROM people");

// The results of SQL queries are DataFrames and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
List<String> names = results.javaRDD().map(new Function<Row, String>() {
  public String call(Row row) {
    return "Name: " + row.getString(0);
  }
}).collect();
{% endhighlight %}
When a dictionary of kwargs cannot be defined ahead of time (for example, the structure of records is encoded in a string, or a text dataset will be parsed and fields will be projected differently for different users), a DataFrame
can be created programmatically with three steps.
1. Create an RDD of tuples or lists from the original RDD;
2. Create the schema represented by a StructType matching the structure of tuples or lists in the RDD created in the step 1.
3. Apply the schema to the RDD via the createDataFrame method provided by SQLContext.

For example:

{% highlight python %}
from pyspark.sql import SQLContext
from pyspark.sql.types import *

# sc is an existing SparkContext.
sqlContext = SQLContext(sc)

# Load a text file and convert each line to a tuple.
lines = sc.textFile("examples/src/main/resources/people.txt")
parts = lines.map(lambda l: l.split(","))
people = parts.map(lambda p: (p[0], p[1].strip()))

# The schema is encoded in a string.
schemaString = "name age"

fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
schema = StructType(fields)

# Apply the schema to the RDD.
schemaPeople = sqlContext.createDataFrame(people, schema)

# Register the DataFrame as a table.
schemaPeople.registerTempTable("people")

# SQL can be run over DataFrames that have been registered as a table.
results = sqlContext.sql("SELECT name FROM people")

# The results of SQL queries are RDDs and support all the normal RDD operations.
names = results.map(lambda p: "Name: " + p.name)
for name in names.collect():
  print(name)
{% endhighlight %}
Spark SQL supports operating on a variety of data sources through the DataFrame
interface. A DataFrame can be operated on as a normal RDD and can also be registered as a temporary table. Registering a DataFrame as a table allows you to run SQL queries over its data. This section describes the general methods for loading and saving data using the Spark Data Sources and then goes into the specific options that are available for the built-in data sources.
In the simplest form, the default data source (parquet
unless otherwise configured by spark.sql.sources.default
) will be used for all operations.
{% highlight scala %}
val df = sqlContext.read.load("examples/src/main/resources/users.parquet")
df.select("name", "favorite_color").write.save("namesAndFavColors.parquet")
{% endhighlight %}
{% highlight java %}
DataFrame df = sqlContext.read().load("examples/src/main/resources/users.parquet");
df.select("name", "favorite_color").write().save("namesAndFavColors.parquet");
{% endhighlight %}
{% highlight python %}
df = sqlContext.read.load("examples/src/main/resources/users.parquet")
df.select("name", "favorite_color").write.save("namesAndFavColors.parquet")
{% endhighlight %}
{% highlight r %}
df <- loadDF(sqlContext, "people.parquet")
saveDF(select(df, "name", "age"), "namesAndAges.parquet")
{% endhighlight %}
You can also manually specify the data source that will be used along with any extra options that you would like to pass to the data source. Data sources are specified by their fully qualified name (i.e., org.apache.spark.sql.parquet
), but for built-in sources you can also use their short names (json
, parquet
, jdbc
). DataFrames of any type can be converted into other types using this syntax.
{% highlight scala %}
val df = sqlContext.read.format("json").load("examples/src/main/resources/people.json")
df.select("name", "age").write.format("parquet").save("namesAndAges.parquet")
{% endhighlight %}
{% highlight java %}
DataFrame df = sqlContext.read().format("json").load("examples/src/main/resources/people.json");
df.select("name", "age").write().format("parquet").save("namesAndAges.parquet");
{% endhighlight %}
{% highlight python %}
df = sqlContext.read.load("examples/src/main/resources/people.json", format="json")
df.select("name", "age").write.save("namesAndAges.parquet", format="parquet")
{% endhighlight %}
{% highlight r %}
df <- loadDF(sqlContext, "people.json", "json")
saveDF(select(df, "name", "age"), "namesAndAges.parquet", "parquet")
{% endhighlight %}
Save operations can optionally take a SaveMode
, which specifies how to handle existing data if present. It is important to realize that these save modes do not utilize any locking and are not atomic. Additionally, when performing an Overwrite, the data will be deleted before the new data is written out.
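For example, a save mode can be passed through the mode method on DataFrame.write; a minimal sketch, reusing the df from the examples above:

{% highlight scala %}
import org.apache.spark.sql.SaveMode

// Append to existing data instead of failing if the path already exists.
df.write.mode(SaveMode.Append).save("namesAndFavColors.parquet")
{% endhighlight %}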
When working with a HiveContext
, DataFrames
can also be saved as persistent tables using the saveAsTable
command. Unlike the registerTempTable
command, saveAsTable
will materialize the contents of the DataFrame and create a pointer to the data in the HiveMetastore. Persistent tables will still exist even after your Spark program has restarted, as long as you maintain your connection to the same metastore. A DataFrame for a persistent table can be created by calling the table
method on a SQLContext
with the name of the table.
By default saveAsTable
will create a “managed table”, meaning that the location of the data will be controlled by the metastore. Managed tables will also have their data deleted automatically when a table is dropped.
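As a sketch of the round trip (the table name people_table is hypothetical, and sqlContext is assumed to be a HiveContext):

{% highlight scala %}
// Materialize the DataFrame as a managed table in the metastore.
df.write.saveAsTable("people_table")

// Later, possibly after a restart, recreate a DataFrame from the persistent table.
val people = sqlContext.table("people_table")
{% endhighlight %}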
Parquet is a columnar format that is supported by many other data processing systems. Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema of the original data.
Using the data from the above example:
{% highlight scala %}
// sqlContext from the previous example is used in this example.
// This is used to implicitly convert an RDD to a DataFrame.
import sqlContext.implicits._

val people: RDD[Person] = ... // An RDD of case class objects, from the previous example.

// The RDD is implicitly converted to a DataFrame by implicits, allowing it to be stored using Parquet.
people.write.parquet("people.parquet")

// Read in the parquet file created above. Parquet files are self-describing so the schema is preserved.
// The result of loading a Parquet file is also a DataFrame.
val parquetFile = sqlContext.read.parquet("people.parquet")

// Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerTempTable("parquetFile")
val teenagers = sqlContext.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")
teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
{% endhighlight %}
{% highlight java %}
// sqlContext from the previous example is used in this example.

DataFrame schemaPeople = ... // The DataFrame from the previous example.

// DataFrames can be saved as Parquet files, maintaining the schema information.
schemaPeople.write().parquet("people.parquet");

// Read in the Parquet file created above. Parquet files are self-describing so the schema is preserved.
// The result of loading a parquet file is also a DataFrame.
DataFrame parquetFile = sqlContext.read().parquet("people.parquet");

// Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerTempTable("parquetFile");
DataFrame teenagers = sqlContext.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19");
List<String> teenagerNames = teenagers.javaRDD().map(new Function<Row, String>() {
  public String call(Row row) {
    return "Name: " + row.getString(0);
  }
}).collect();
{% endhighlight %}
{% highlight python %}
schemaPeople # The DataFrame from the previous example.

# DataFrames can be saved as Parquet files, maintaining the schema information.
schemaPeople.write.parquet("people.parquet")

# Read in the Parquet file created above. Parquet files are self-describing so the schema is preserved.
# The result of loading a parquet file is also a DataFrame.
parquetFile = sqlContext.read.parquet("people.parquet")

# Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerTempTable("parquetFile")
teenagers = sqlContext.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")
teenNames = teenagers.map(lambda p: "Name: " + p.name)
for teenName in teenNames.collect():
  print(teenName)
{% endhighlight %}
{% highlight r %}
schemaPeople # The DataFrame from the previous example.

# DataFrames can be saved as Parquet files, maintaining the schema information.
saveAsParquetFile(schemaPeople, "people.parquet")

# Read in the Parquet file created above. Parquet files are self-describing so the schema is preserved.
parquetFile <- parquetFile(sqlContext, "people.parquet")

# Parquet files can also be registered as tables and then used in SQL statements.
registerTempTable(parquetFile, "parquetFile")
teenagers <- sql(sqlContext, "SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")
teenNames <- map(teenagers, function(p) { paste("Name:", p$name) })
for (teenName in collect(teenNames)) {
  cat(teenName, "\n")
}
{% endhighlight %}
{% highlight python %}
sqlContext.sql("REFRESH TABLE my_table")
{% endhighlight %}
{% highlight sql %}
CREATE TEMPORARY TABLE parquetTable
USING org.apache.spark.sql.parquet
OPTIONS (
  path "examples/src/main/resources/people.parquet"
)
SELECT * FROM parquetTable
{% endhighlight %}
Table partitioning is a common optimization approach used in systems like Hive. In a partitioned table, data are usually stored in different directories, with partitioning column values encoded in the path of each partition directory. The Parquet data source is now able to discover and infer partitioning information automatically. For example, we can store all our previously used population data into a partitioned table using the following directory structure, with two extra columns, gender
and country
as partitioning columns:
{% highlight text %}
path
└── to
    └── table
        ├── gender=male
        │   ├── ...
        │   │
        │   ├── country=US
        │   │   └── data.parquet
        │   ├── country=CN
        │   │   └── data.parquet
        │   └── ...
        └── gender=female
            ├── ...
            │
            ├── country=US
            │   └── data.parquet
            ├── country=CN
            │   └── data.parquet
            └── ...
{% endhighlight %}
By passing path/to/table
to either SQLContext.read.parquet
or SQLContext.read.load
, Spark SQL will automatically extract the partitioning information from the paths. Now the schema of the returned DataFrame becomes:
{% highlight text %}
root
|-- name: string (nullable = true)
|-- age: long (nullable = true)
|-- gender: string (nullable = true)
|-- country: string (nullable = true)
{% endhighlight %}
Notice that the data types of the partitioning columns are automatically inferred. Currently, numeric data types and string type are supported. Sometimes users may not want to automatically infer the data types of the partitioning columns. For these use cases, the automatic type inference can be configured by spark.sql.sources.partitionColumnTypeInference.enabled
, which defaults to true
. When type inference is disabled, string type will be used for the partitioning columns.
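For example, a one-line sketch of disabling the inference so that partition columns come back as strings:

{% highlight scala %}
sqlContext.setConf("spark.sql.sources.partitionColumnTypeInference.enabled", "false")
{% endhighlight %}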
Like ProtocolBuffer, Avro, and Thrift, Parquet also supports schema evolution. Users can start with a simple schema, and gradually add more columns to the schema as needed. In this way, users may end up with multiple Parquet files with different but mutually compatible schemas. The Parquet data source is now able to automatically detect this case and merge schemas of all these files.
Since schema merging is a relatively expensive operation, and is not a necessity in most cases, we turned it off by default starting from 1.5.0. You may enable it by

1. setting the data source option mergeSchema to true when reading Parquet files (as shown in the examples below), or
2. setting the global SQL option spark.sql.parquet.mergeSchema to true.

{% highlight scala %}
// sqlContext from the previous example is used in this example.
// This is used to implicitly convert an RDD to a DataFrame.
import sqlContext.implicits._
// Create a simple DataFrame, stored into a partition directory
val df1 = sc.makeRDD(1 to 5).map(i => (i, i * 2)).toDF("single", "double")
df1.write.parquet("data/test_table/key=1")

// Create another DataFrame in a new partition directory,
// adding a new column and dropping an existing column
val df2 = sc.makeRDD(6 to 10).map(i => (i, i * 3)).toDF("single", "triple")
df2.write.parquet("data/test_table/key=2")

// Read the partitioned table
val df3 = sqlContext.read.option("mergeSchema", "true").parquet("data/test_table")
df3.printSchema()

// The final schema consists of all 3 columns in the Parquet files together
// with the partitioning column appeared in the partition directory paths.
// root
// |-- single: int (nullable = true)
// |-- double: int (nullable = true)
// |-- triple: int (nullable = true)
// |-- key : int (nullable = true)
{% endhighlight %}
{% highlight python %}
# Create a simple DataFrame, stored into a partition directory
df1 = sqlContext.createDataFrame(sc.parallelize(range(1, 6))
                                   .map(lambda i: Row(single=i, double=i * 2)))
df1.write.parquet("data/test_table/key=1")

# Create another DataFrame in a new partition directory,
# adding a new column and dropping an existing column
df2 = sqlContext.createDataFrame(sc.parallelize(range(6, 11))
                                   .map(lambda i: Row(single=i, triple=i * 3)))
df2.write.parquet("data/test_table/key=2")

# Read the partitioned table
df3 = sqlContext.read.option("mergeSchema", "true").parquet("data/test_table")
df3.printSchema()
{% endhighlight %}
{% highlight r %}
saveDF(df1, "data/test_table/key=1", "parquet", "overwrite")
saveDF(df2, "data/test_table/key=2", "parquet", "overwrite")

df3 <- loadDF(sqlContext, "data/test_table", "parquet", mergeSchema="true")
printSchema(df3)
{% endhighlight %}
When reading from and writing to Hive metastore Parquet tables, Spark SQL will try to use its own Parquet support instead of Hive SerDe for better performance. This behavior is controlled by the spark.sql.hive.convertMetastoreParquet
configuration, and is turned on by default.
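For example, a sketch of turning the conversion off and falling back to the Hive SerDe (sqlContext must be a HiveContext here):

{% highlight scala %}
sqlContext.setConf("spark.sql.hive.convertMetastoreParquet", "false")
{% endhighlight %}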
There are two key differences between Hive and Parquet from the perspective of table schema processing:

1. Hive is case insensitive, while Parquet is not.
2. Hive considers all columns nullable, while nullability in Parquet is significant.

For this reason, we must reconcile the Hive metastore schema with the Parquet schema when converting a Hive metastore Parquet table to a Spark SQL Parquet table. The reconciliation rules are:

1. Fields that have the same name in both schemas must have the same data type regardless of nullability. The reconciled field should have the data type of the Parquet side, so that nullability is respected.

2. The reconciled schema contains exactly those fields defined in the Hive metastore schema. Any fields that only appear in the Parquet schema are dropped in the reconciled schema, and any fields that only appear in the Hive metastore schema are added as nullable fields.
Spark SQL caches Parquet metadata for better performance. When Hive metastore Parquet table conversion is enabled, metadata of those converted tables are also cached. If these tables are updated by Hive or other external tools, you need to refresh them manually to ensure consistent metadata.
{% highlight scala %}
// sqlContext is an existing HiveContext
sqlContext.refreshTable("my_table")
{% endhighlight %}
{% highlight java %}
// sqlContext is an existing HiveContext
sqlContext.refreshTable("my_table");
{% endhighlight %}
{% highlight python %}
# sqlContext is an existing HiveContext
sqlContext.refreshTable("my_table")
{% endhighlight %}
{% highlight sql %} REFRESH TABLE my_table; {% endhighlight %}
Configuration of Parquet can be done using the setConf
method on SQLContext
or by running SET key=value
commands using SQL.
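For example, taking spark.sql.parquet.binaryAsString as the option being set, the two routes look like this:

{% highlight scala %}
// Programmatically:
sqlContext.setConf("spark.sql.parquet.binaryAsString", "true")
// Or with a SQL command:
sqlContext.sql("SET spark.sql.parquet.binaryAsString=true")
{% endhighlight %}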
Note that the file that is offered as a json file is not a typical JSON file. Each line must contain a separate, self-contained valid JSON object. As a consequence, a regular multi-line JSON file will most often fail.
{% highlight scala %}
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// A JSON dataset is pointed to by path.
// The path can be either a single text file or a directory storing text files.
val path = "examples/src/main/resources/people.json"
val people = sqlContext.read.json(path)

// The inferred schema can be visualized using the printSchema() method.
people.printSchema()
// root
// |-- age: integer (nullable = true)
// |-- name: string (nullable = true)

// Register this DataFrame as a table.
people.registerTempTable("people")

// SQL statements can be run by using the sql methods provided by sqlContext.
val teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")

// Alternatively, a DataFrame can be created for a JSON dataset represented by
// an RDD[String] storing one JSON object per string.
val anotherPeopleRDD = sc.parallelize(
  """{"name":"Yin","address":{"city":"Columbus","state":"Ohio"}}""" :: Nil)
val anotherPeople = sqlContext.read.json(anotherPeopleRDD)
{% endhighlight %}
Note that the file that is offered as a json file is not a typical JSON file. Each line must contain a separate, self-contained valid JSON object. As a consequence, a regular multi-line JSON file will most often fail.
{% highlight java %}
// sc is an existing JavaSparkContext.
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

// A JSON dataset is pointed to by path.
// The path can be either a single text file or a directory storing text files.
DataFrame people = sqlContext.read().json("examples/src/main/resources/people.json");

// The inferred schema can be visualized using the printSchema() method.
people.printSchema();
// root
// |-- age: integer (nullable = true)
// |-- name: string (nullable = true)

// Register this DataFrame as a table.
people.registerTempTable("people");

// SQL statements can be run by using the sql methods provided by sqlContext.
DataFrame teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19");

// Alternatively, a DataFrame can be created for a JSON dataset represented by
// an RDD[String] storing one JSON object per string.
List<String> jsonData = Arrays.asList(
  "{\"name\":\"Yin\",\"address\":{\"city\":\"Columbus\",\"state\":\"Ohio\"}}");
JavaRDD<String> anotherPeopleRDD = sc.parallelize(jsonData);
DataFrame anotherPeople = sqlContext.read().json(anotherPeopleRDD);
{% endhighlight %}
Note that the file that is offered as a json file is not a typical JSON file. Each line must contain a separate, self-contained valid JSON object. As a consequence, a regular multi-line JSON file will most often fail.
{% highlight python %}
# sc is an existing SparkContext.
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)

# A JSON dataset is pointed to by path.
# The path can be either a single text file or a directory storing text files.
people = sqlContext.read.json("examples/src/main/resources/people.json")

# The inferred schema can be visualized using the printSchema() method.
people.printSchema()

# Register this DataFrame as a table.
people.registerTempTable("people")

# SQL statements can be run by using the sql methods provided by sqlContext.
teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")

# Alternatively, a DataFrame can be created for a JSON dataset represented by
# an RDD[String] storing one JSON object per string.
anotherPeopleRDD = sc.parallelize([
  '{"name":"Yin","address":{"city":"Columbus","state":"Ohio"}}'])
anotherPeople = sqlContext.jsonRDD(anotherPeopleRDD)
{% endhighlight %}
Note that the file that is offered as a json file is not a typical JSON file. Each line must contain a separate, self-contained valid JSON object. As a consequence, a regular multi-line JSON file will most often fail.
{% highlight r %}
# sc is an existing SparkContext.
sqlContext <- sparkRSQL.init(sc)

# A JSON dataset is pointed to by path.
# The path can be either a single text file or a directory storing text files.
path <- "examples/src/main/resources/people.json"

# Create a DataFrame from the file(s) pointed to by path
people <- jsonFile(sqlContext, path)

# The inferred schema can be visualized using the printSchema() method.
printSchema(people)

# Register this DataFrame as a table.
registerTempTable(people, "people")

# SQL statements can be run by using the sql methods provided by sqlContext.
teenagers <- sql(sqlContext, "SELECT name FROM people WHERE age >= 13 AND age <= 19")
{% endhighlight %}
{% highlight sql %}
CREATE TEMPORARY TABLE jsonTable
USING org.apache.spark.sql.json
OPTIONS (
  path "examples/src/main/resources/people.json"
)
SELECT * FROM jsonTable
{% endhighlight %}
Spark SQL also supports reading and writing data stored in Apache Hive. However, since Hive has a large number of dependencies, it is not included in the default Spark assembly. Hive support is enabled by adding the -Phive
and -Phive-thriftserver
flags to Spark's build. This command builds a new assembly jar that includes Hive. Note that this Hive assembly jar must also be present on all of the worker nodes, as they will need access to the Hive serialization and deserialization libraries (SerDes) in order to access data stored in Hive.
Configuration of Hive is done by placing your hive-site.xml
file in conf/
. Please note when running the query on a YARN cluster (yarn-cluster
mode), the datanucleus
jars under the lib_managed/jars
directory and hive-site.xml
under conf/
directory need to be available on the driver and all executors launched by the YARN cluster. The convenient way to do this is adding them through the --jars
option and --files
option of the spark-submit
command.
When working with Hive one must construct a HiveContext
, which inherits from SQLContext
, and adds support for finding tables in the MetaStore and writing queries using HiveQL. Users who do not have an existing Hive deployment can still create a HiveContext
. When not configured by the hive-site.xml, the context automatically creates metastore_db
and warehouse
in the current directory.
{% highlight scala %}
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

// Queries are expressed in HiveQL
sqlContext.sql("FROM src SELECT key, value").collect().foreach(println)
{% endhighlight %}
When working with Hive one must construct a HiveContext
, which inherits from SQLContext
, and adds support for finding tables in the MetaStore and writing queries using HiveQL. In addition to the sql
method a HiveContext
also provides an hql
method, which allows queries to be expressed in HiveQL.
{% highlight java %}
// sc is an existing JavaSparkContext.
HiveContext sqlContext = new org.apache.spark.sql.hive.HiveContext(sc.sc());

sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)");
sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src");

// Queries are expressed in HiveQL.
Row[] results = sqlContext.sql("FROM src SELECT key, value").collect();
{% endhighlight %}
When working with Hive one must construct a HiveContext
, which inherits from SQLContext
, and adds support for finding tables in the MetaStore and writing queries using HiveQL. {% highlight python %}
# sc is an existing SparkContext.
from pyspark.sql import HiveContext
sqlContext = HiveContext(sc)

sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

# Queries can be expressed in HiveQL.
results = sqlContext.sql("FROM src SELECT key, value").collect()
{% endhighlight %}
When working with Hive one must construct a HiveContext
, which inherits from SQLContext
, and adds support for finding tables in the MetaStore and writing queries using HiveQL. {% highlight r %}
sqlContext <- sparkRHive.init(sc)
sql(sqlContext, "CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sql(sqlContext, "LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

# Queries can be expressed in HiveQL.
results <- collect(sql(sqlContext, "FROM src SELECT key, value"))
{% endhighlight %}
One of the most important pieces of Spark SQL's Hive support is interaction with Hive metastore, which enables Spark SQL to access metadata of Hive tables. Starting from Spark 1.4.0, a single binary build of Spark SQL can be used to query different versions of Hive metastores, using the configuration described below. Note that independent of the version of Hive that is being used to talk to the metastore, internally Spark SQL will compile against Hive 1.2.1 and use those classes for internal execution (serdes, UDFs, UDAFs, etc).
The following options can be used to configure the version of Hive that is used to retrieve metadata:
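As a sketch, these keys (spark.sql.hive.metastore.version and spark.sql.hive.metastore.jars) can be supplied when the application's SparkContext is created; the version value below is illustrative:

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

// These keys must be set before the HiveContext first connects to the metastore.
val conf = new SparkConf()
  .set("spark.sql.hive.metastore.version", "0.13.1") // illustrative version
  .set("spark.sql.hive.metastore.jars", "maven")     // resolve matching Hive jars from Maven
val sc = new SparkContext(conf)
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
{% endhighlight %}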
Spark SQL also includes a data source that can read data from other databases using JDBC. This functionality should be preferred over using JdbcRDD. This is because the results are returned as a DataFrame and they can easily be processed in Spark SQL or joined with other data sources. The JDBC data source is also easier to use from Java or Python as it does not require the user to provide a ClassTag. (Note that this is different than the Spark SQL JDBC server, which allows other applications to run queries using Spark SQL).
To get started you will need to include the JDBC driver for your particular database on the Spark classpath. For example, to connect to postgres from the Spark Shell you would run the following command:
{% highlight bash %} SPARK_CLASSPATH=postgresql-9.3-1102-jdbc41.jar bin/spark-shell {% endhighlight %}
Tables from the remote database can be loaded as a DataFrame or Spark SQL Temporary table using the Data Sources API. The following options are supported:
{% highlight scala %}
val jdbcDF = sqlContext.read.format("jdbc").options(
  Map("url" -> "jdbc:postgresql:dbserver", "dbtable" -> "schema.tablename")).load()
{% endhighlight %}
{% highlight java %}
Map<String, String> options = new HashMap<String, String>();
options.put("url", "jdbc:postgresql:dbserver");
options.put("dbtable", "schema.tablename");

DataFrame jdbcDF = sqlContext.read().format("jdbc").options(options).load();
{% endhighlight %}
{% highlight python %}
df = sqlContext.read.format('jdbc').options(url='jdbc:postgresql:dbserver', dbtable='schema.tablename').load()
{% endhighlight %}
{% highlight r %}
df <- loadDF(sqlContext, source="jdbc", url="jdbc:postgresql:dbserver", dbtable="schema.tablename")
{% endhighlight %}
{% highlight sql %}
CREATE TEMPORARY TABLE jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:postgresql:dbserver",
  dbtable "schema.tablename"
)
{% endhighlight %}
For some workloads it is possible to improve performance by either caching data in memory, or by turning on some experimental options.
Spark SQL can cache tables using an in-memory columnar format by calling sqlContext.cacheTable("tableName")
or dataFrame.cache()
. Then Spark SQL will scan only required columns and will automatically tune compression to minimize memory usage and GC pressure. You can call sqlContext.uncacheTable("tableName")
to remove the table from memory.
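A minimal sketch, assuming the people table registered in the earlier examples:

{% highlight scala %}
sqlContext.cacheTable("people")
sqlContext.sql("SELECT COUNT(*) FROM people").show() // scans the in-memory columnar cache
sqlContext.uncacheTable("people")
{% endhighlight %}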
Configuration of in-memory caching can be done using the setConf
method on SQLContext
or by running SET key=value
commands using SQL.
The following options can also be used to tune the performance of query execution. It is possible that these options will be deprecated in a future release as more optimizations are performed automatically.
Spark SQL can also act as a distributed query engine using its JDBC/ODBC or command-line interface. In this mode, end-users or applications can interact with Spark SQL directly to run SQL queries, without the need to write any code.
The Thrift JDBC/ODBC server implemented here corresponds to the HiveServer2
in Hive 1.2.1. You can test the JDBC server with the beeline script that comes with either Spark or Hive 1.2.1.
To start the JDBC/ODBC server, run the following in the Spark directory:
./sbin/start-thriftserver.sh
This script accepts all bin/spark-submit
command line options, plus a --hiveconf
option to specify Hive properties. You may run ./sbin/start-thriftserver.sh --help
for a complete list of all available options. By default, the server listens on localhost:10000. You may override this behaviour via either environment variables, i.e.:
{% highlight bash %}
export HIVE_SERVER2_THRIFT_PORT=<listening-port>
export HIVE_SERVER2_THRIFT_BIND_HOST=<listening-host>
./sbin/start-thriftserver.sh \
  --master <master-uri> \
  ...
{% endhighlight %}
or system properties:
{% highlight bash %}
./sbin/start-thriftserver.sh \
  --hiveconf hive.server2.thrift.port=<listening-port> \
  --hiveconf hive.server2.thrift.bind.host=<listening-host> \
  --master <master-uri>
  ...
{% endhighlight %}
Now you can use beeline to test the Thrift JDBC/ODBC server:
./bin/beeline
Connect to the JDBC/ODBC server in beeline with:
beeline> !connect jdbc:hive2://localhost:10000
Beeline will ask you for a username and password. In non-secure mode, simply enter the username on your machine and a blank password. For secure mode, please follow the instructions given in the beeline documentation.
Configuration of Hive is done by placing your hive-site.xml
file in conf/
.
You may also use the beeline script that comes with Hive.
Thrift JDBC server also supports sending thrift RPC messages over HTTP transport. Use the following setting to enable HTTP mode as system property or in hive-site.xml
file in conf/
:
hive.server2.transport.mode - Set this to value: http
hive.server2.thrift.http.port - HTTP port number to listen on; default is 10001
hive.server2.http.endpoint - HTTP endpoint; default is cliservice
To test, use beeline to connect to the JDBC/ODBC server in http mode with:
beeline> !connect jdbc:hive2://<host>:<port>/<database>?hive.server2.transport.mode=http;hive.server2.thrift.http.path=<http_endpoint>
The Spark SQL CLI is a convenient tool to run the Hive metastore service in local mode and execute queries input from the command line. Note that the Spark SQL CLI cannot talk to the Thrift JDBC server.
To start the Spark SQL CLI, run the following in the Spark directory:
./bin/spark-sql
Configuration of Hive is done by placing your hive-site.xml
file in conf/
. You may run ./bin/spark-sql --help
for a complete list of all available options.
- Optimized execution using manually managed memory (Tungsten) is now enabled by default, along with code generation for expression evaluation. These features can both be disabled by setting spark.sql.tungsten.enabled to false.
- Parquet schema merging is no longer enabled by default. It can be re-enabled by setting spark.sql.parquet.mergeSchema to true.
- Resolution of strings to columns in Python now supports using dots (.) to qualify the column or access nested values. For example df['table.column.nestedField']. However, this means that if your column name contains any dots you must now escape them using backticks (e.g., table.`column.with.dots`.nested).
- In-memory columnar storage partition pruning is on by default. It can be disabled by setting spark.sql.inMemoryColumnarStorage.partitionPruning to false.
- Unlimited precision decimal columns are no longer supported; instead Spark SQL enforces a maximum precision of 38. When inferring schema from BigDecimal objects, a precision of (38, 18) is now used. When no precision is specified in DDL then the default remains Decimal(10, 0).
- In the sql dialect, floating point numbers are now parsed as decimal. HiveQL parsing remains unchanged.
- The JSON data source will not automatically load new files that are created by other applications. Users can use the REFRESH TABLE SQL command or HiveContext's refreshTable method to include those new files to the table. For a DataFrame representing a JSON dataset, users need to recreate the DataFrame and the new DataFrame will include new files.

Based on user feedback, we created a new, more fluid API for reading data in (SQLContext.read
) and writing data out (DataFrame.write
), and deprecated the old APIs (e.g. SQLContext.parquetFile
, SQLContext.jsonFile
).
See the API docs for SQLContext.read
( Scala, Java, Python ) and DataFrame.write
( Scala, Java, Python ) for more information.
Based on user feedback, we changed the default behavior of DataFrame.groupBy().agg()
to retain the grouping columns in the resulting DataFrame
. To keep the behavior in 1.3, set spark.sql.retainGroupColumns
to false
.
{% highlight scala %}
// In 1.3.x, in order for the grouping column "department" to show up,
// it must be included explicitly as part of the agg function call.
df.groupBy("department").agg($"department", max("age"), sum("expense"))

// In 1.4+, grouping column "department" is included automatically.
df.groupBy("department").agg(max("age"), sum("expense"))

// Revert to 1.3 behavior (not retaining grouping column) by:
sqlContext.setConf("spark.sql.retainGroupColumns", "false")
{% endhighlight %}
{% highlight java %}
// In 1.3.x, in order for the grouping column "department" to show up,
// it must be included explicitly as part of the agg function call.
df.groupBy("department").agg(col("department"), max("age"), sum("expense"));

// In 1.4+, grouping column "department" is included automatically.
df.groupBy("department").agg(max("age"), sum("expense"));

// Revert to 1.3 behavior (not retaining grouping column) by:
sqlContext.setConf("spark.sql.retainGroupColumns", "false");
{% endhighlight %}
{% highlight python %}
import pyspark.sql.functions as func

# In 1.3.x, in order for the grouping column "department" to show up,
# it must be included explicitly as part of the agg function call.
df.groupBy("department").agg(df["department"], func.max("age"), func.sum("expense"))

# In 1.4+, grouping column "department" is included automatically.
df.groupBy("department").agg(func.max("age"), func.sum("expense"))

# Revert to 1.3.x behavior (not retaining grouping column) by:
sqlContext.setConf("spark.sql.retainGroupColumns", "false")
{% endhighlight %}
In Spark 1.3 we removed the “Alpha” label from Spark SQL and as part of this did a cleanup of the available APIs. From Spark 1.3 onwards, Spark SQL will provide binary compatibility with other releases in the 1.X series. This compatibility guarantee excludes APIs that are explicitly marked as unstable (i.e., DeveloperAPI or Experimental).
The largest change that users will notice when upgrading to Spark SQL 1.3 is that SchemaRDD
has been renamed to DataFrame
. This is primarily because DataFrames no longer inherit from RDD directly, but instead provide most of the functionality that RDDs provide though their own implementation. DataFrames can still be converted to RDDs by calling the .rdd
method.
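For example, assuming df is an existing DataFrame:

{% highlight scala %}
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row

// .rdd exposes the underlying RDD[Row] of a DataFrame.
val rowRDD: RDD[Row] = df.rdd
{% endhighlight %}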
In Scala there is a type alias from SchemaRDD
to DataFrame
to provide source compatibility for some use cases. It is still recommended that users update their code to use DataFrame
instead. Java and Python users will need to update their code.
Prior to Spark 1.3 there were separate Java compatible classes (JavaSQLContext
and JavaSchemaRDD
) that mirrored the Scala API. In Spark 1.3 the Java API and Scala API have been unified. Users of either language should use SQLContext
and DataFrame
. In general theses classes try to use types that are usable from both languages (i.e. Array
instead of language specific collections). In some cases where no common type exists (e.g., for passing in closures or Maps) function overloading is used instead.
Additionally the Java specific types API has been removed. Users of both Scala and Java should use the classes present in org.apache.spark.sql.types
to describe schema programmatically.
Many of the code examples prior to Spark 1.3 started with import sqlContext._
, which brought all of the functions from sqlContext into scope. In Spark 1.3 we have isolated the implicit conversions for converting RDD
s into DataFrame
s into an object inside of the SQLContext
. Users should now write import sqlContext.implicits._
.
Additionally, the implicit conversions now only augment RDDs that are composed of Product
s (i.e., case classes or tuples) with a method toDF
, instead of applying automatically.
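A minimal sketch of the now-explicit conversion:

{% highlight scala %}
import sqlContext.implicits._

case class Record(key: Int, value: String)
// toDF is only available on RDDs of Products (case classes or tuples).
val df = sc.parallelize(Seq(Record(1, "a"), Record(2, "b"))).toDF()
{% endhighlight %}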
When using functions inside of the DSL (now replaced with the DataFrame
API) users used to import org.apache.spark.sql.catalyst.dsl
. Instead the public dataframe functions API should be used: import org.apache.spark.sql.functions._
.
Spark 1.3 removes the type aliases that were present in the base sql package for DataType
. Users should instead import the classes in org.apache.spark.sql.types.

UDF Registration Moved to sqlContext.udf (Java & Scala)

Functions that are used to register UDFs, either for use in the DataFrame DSL or SQL, have been moved into the udf object in SQLContext.

{% highlight scala %}
sqlContext.udf.register("strLen", (s: String) => s.length())
{% endhighlight %}
{% highlight java %}
sqlContext.udf().register("strLen", (String s) -> s.length(), DataTypes.IntegerType);
{% endhighlight %}
Python UDF registration is unchanged.
When using DataTypes in Python you will need to construct them (i.e. StringType()
) instead of referencing a singleton.
To set a Fair Scheduler pool for a JDBC client session, users can set the spark.sql.thriftserver.scheduler.pool
variable:
SET spark.sql.thriftserver.scheduler.pool=accounting;
In Shark, the default reducer number is 1 and is controlled by the property mapred.reduce.tasks
. Spark SQL deprecates this property in favor of spark.sql.shuffle.partitions
, whose default value is 200. Users may customize this property via SET
:
SET spark.sql.shuffle.partitions=10;
SELECT page, count(*) c FROM logs_last_month_cached
GROUP BY page ORDER BY c DESC LIMIT 10;
You may also put this property in hive-site.xml
to override the default value.
For now, the mapred.reduce.tasks
property is still recognized, and is converted to spark.sql.shuffle.partitions
automatically.
The shark.cache
table property no longer exists, and tables whose name end with _cached
are no longer automatically cached. Instead, we provide CACHE TABLE
and UNCACHE TABLE
statements to let user control table caching explicitly:
CACHE TABLE logs_last_month; UNCACHE TABLE logs_last_month;
NOTE: CACHE TABLE tbl
is now eager by default, not lazy. You no longer need to trigger cache materialization manually.
Since Spark 1.2.0, Spark SQL provides a statement that lets users control table caching, including whether it is lazy:
CACHE [LAZY] TABLE [AS SELECT] ...
Several caching related features are not supported yet:

- User defined partition level cache eviction policy
- RDD reloading
- In-memory cache write through policy
Spark SQL is designed to be compatible with the Hive Metastore, SerDes and UDFs. Currently Hive SerDes and UDFs are based on Hive 1.2.1, and Spark SQL can be connected to different versions of Hive Metastore (from 0.12.0 to 1.2.1; also see http://spark.apache.org/docs/latest/sql-programming-guide.html#interacting-with-different-versions-of-hive-metastore).
The Spark SQL Thrift JDBC server is designed to be “out of the box” compatible with existing Hive installations. You do not need to modify your existing Hive Metastore or change the data placement or partitioning of your tables.
Spark SQL supports the vast majority of Hive features, such as:
SELECT
GROUP BY
ORDER BY
CLUSTER BY
SORT BY
=
, ⇔
, ==
, <>
, <
, >
, >=
, <=
, etc)+
, -
, *
, /
, %
, etc)AND
, &&
, OR
, ||
, etc)sign
, ln
, cos
, etc)instr
, length
, printf
, etc)JOIN
{LEFT|RIGHT|FULL} OUTER JOIN
LEFT SEMI JOIN
CROSS JOIN
SELECT col FROM ( SELECT a + b AS col from t1) t2
CREATE TABLE
CREATE TABLE AS SELECT
ALTER TABLE
TINYINT
SMALLINT
INT
BIGINT
BOOLEAN
FLOAT
DOUBLE
STRING
BINARY
TIMESTAMP
DATE
ARRAY<>
MAP<>
STRUCT<>
Below is a list of Hive features that we don't support yet. Most of these features are rarely used in Hive deployments.
Major Hive Features
Esoteric Hive Features
UNION
type

Hive Input/Output Formats
Hive Optimizations
A handful of Hive optimizations are not yet included in Spark. Some of these (such as indexes) are less important due to Spark SQL's in-memory computational model. Others are slotted for future releases of Spark SQL.
SET spark.sql.shuffle.partitions=[num_tasks];
".

STREAMTABLE hint in join: Spark SQL does not follow the STREAMTABLE hint.

Spark SQL and DataFrames support the following data types:
ByteType
: Represents 1-byte signed integer numbers. The range of numbers is from -128
to 127
.ShortType
: Represents 2-byte signed integer numbers. The range of numbers is from -32768
to 32767
.IntegerType
: Represents 4-byte signed integer numbers. The range of numbers is from -2147483648
to 2147483647
.LongType
: Represents 8-byte signed integer numbers. The range of numbers is from -9223372036854775808
to 9223372036854775807
.FloatType
: Represents 4-byte single-precision floating point numbers.DoubleType
: Represents 8-byte double-precision floating point numbers.DecimalType
: Represents arbitrary-precision signed decimal numbers. Backed internally by java.math.BigDecimal
. A BigDecimal
consists of an arbitrary precision integer unscaled value and a 32-bit integer scale.StringType
: Represents character string values.BinaryType
: Represents byte sequence values.BooleanType
: Represents boolean values.TimestampType
: Represents values comprising values of fields year, month, day, hour, minute, and second.DateType
: Represents values comprising values of fields year, month, day.ArrayType(elementType, containsNull)
: Represents values comprising a sequence of elements with the type of elementType
. containsNull
is used to indicate if elements in an ArrayType
value can have null
values.MapType(keyType, valueType, valueContainsNull)
: Represents values comprising a set of key-value pairs. The data type of keys are described by keyType
and the data type of values are described by valueType
. For a MapType
value, keys are not allowed to have null
values. valueContainsNull
is used to indicate if values of a MapType
value can have null
values.StructType(fields)
: Represents values with the structure described by a sequence of StructField
s (fields
).StructField(name, dataType, nullable)
: Represents a field in a StructType
. The name of a field is indicated by name
. The data type of a field is indicated by dataType
. nullable
is used to indicate if values of this field can have null
values.All data types of Spark SQL are located in the package org.apache.spark.sql.types
. You can access them by doing {% highlight scala %} import org.apache.spark.sql.types._ {% endhighlight %}
All data types of Spark SQL are located in the package org.apache.spark.sql.types
. To access or create a data type, please use factory methods provided in org.apache.spark.sql.types.DataTypes
.
All data types of Spark SQL are located in the package pyspark.sql.types
. You can access them by doing {% highlight python %} from pyspark.sql.types import * {% endhighlight %}
There is special handling for not-a-number (NaN) when dealing with float or double types that does not exactly match standard floating point semantics. Specifically:

- NaN = NaN returns true.
- In aggregations all NaN values are grouped together.
- NaN is treated as a normal value in join keys.
- NaN values go last when in ascending order, larger than any other numeric value.