This post explains how to read Shapefiles with Apache Sedona and Spark.
A Shapefile is “an Esri vector data storage format for storing the location, shape, and attributes of geographic features.” The Shapefile format is proprietary, but the spec is open.
Shapefiles have many limitations but are extensively used, so it’s beneficial that they are readable by Sedona.
Let’s look at how to read Shapefiles with Sedona and Spark.
Let’s start by creating a Shapefile with GeoPandas and Shapely:
```python
import geopandas as gpd
from shapely.geometry import Point

point1 = Point(0, 0)
point2 = Point(1, 1)
data = {
    "name": ["Point A", "Point B"],
    "value": [10, 20],
    "geometry": [point1, point2],
}
gdf = gpd.GeoDataFrame(data, geometry="geometry")
gdf.to_file("/tmp/my_geodata.shp")
```
Here are the files that are output:
```
/tmp/
  my_geodata.cpg
  my_geodata.dbf
  my_geodata.shp
  my_geodata.shx
```
A Shapefile is not stored in a single file: the data is spread across several files, with the geometry in the .shp file, the spatial index in .shx, the attribute table in .dbf, and the character encoding in .cpg.
Here’s how to read a Shapefile into a Sedona DataFrame powered by Spark:
```python
df = sedona.read.format("shapefile").load("/tmp/my_geodata.shp")
df.show()
```
```
+-----------+-------+-----+
|   geometry|   name|value|
+-----------+-------+-----+
|POINT (0 0)|Point A|   10|
|POINT (1 1)|Point B|   20|
+-----------+-------+-----+
```
You can also see the unique record number for each row in the Shapefile as follows:
```python
df = (
    sedona.read.format("shapefile")
    .option("key.name", "FID")
    .load("/tmp/my_geodata.shp")
)
df.show()
```
```
+-----------+---+-------+-----+
|   geometry|FID|   name|value|
+-----------+---+-------+-----+
|POINT (0 0)|  1|Point A|   10|
|POINT (1 1)|  2|Point B|   20|
+-----------+---+-------+-----+
```
The geometry column is named geometry by default. You can change this with the geometry.name option. If one of the non-spatial attributes is already named “geometry”, you must set geometry.name to a different value to avoid a conflict:
```python
df = (
    sedona.read.format("shapefile")
    .option("geometry.name", "geom")
    .load("/path/to/shapefile")
)
```
The character encoding of string attributes is inferred from the .cpg file. If you see garbled values in string fields, you can manually specify the correct charset using the charset option. For example:
=== “Scala”
```scala
val df = sedona.read.format("shapefile").option("charset", "UTF-8").load("/path/to/shapefile")
```
=== “Java”
```java
Dataset<Row> df = sedona.read().format("shapefile").option("charset", "UTF-8").load("/path/to/shapefile");
```
=== “Python”
```python
df = (
sedona.read.format("shapefile")
.option("charset", "UTF-8")
.load("/path/to/shapefile")
)
```
Let’s see how to load many Shapefiles into a Sedona DataFrame.
Suppose you have a directory with many Shapefiles as follows:
```
/tmp/shapefiles/
  file1.cpg
  file1.dbf
  file1.shp
  file1.shx
  file2.cpg
  file2.dbf
  file2.shp
  file2.shx
```
The directory contains two .shp files and other supporting files.
Here’s how to load many Shapefiles into a Sedona DataFrame:
```python
df = sedona.read.format("shapefile").load("/tmp/shapefiles")
df.show()
```
```
+-----------+-------+-----+
|   geometry|   name|value|
+-----------+-------+-----+
|POINT (0 0)|Point A|   10|
|POINT (1 1)|Point B|   20|
|POINT (2 2)|Point C|   10|
|POINT (3 3)|Point D|   20|
+-----------+-------+-----+
```
You can just pass the directory where the Shapefiles are stored, and the Sedona reader will pick them up.
The input path can be a directory containing one or multiple Shapefiles or a path to a .shp file.
.option("recursiveFileLookup", "true").Shapefiles are deeply integrated into the Esri ecosystem and extensively used in many services.
Shapefiles are deeply integrated into the Esri ecosystem and extensively used in many services. You can output a Shapefile from Esri and then read it with another engine like Sedona.
However, Esri created the Shapefile format in the early 1990s, so it has many limitations.
Here are some of the disadvantages of Shapefiles:
See this page for more information on the limitations of Shapefiles.
Due to these limitations, other options are worth investigating.
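GeoParquet, for example, is a popular modern alternative. Here’s a minimal sketch of converting the Shapefile from earlier, assuming your Sedona build includes the geoparquet data source:

```python
# Read the Shapefile and rewrite it as GeoParquet
# (assumes the Sedona geoparquet data source is available).
df = sedona.read.format("shapefile").load("/tmp/my_geodata.shp")
df.write.format("geoparquet").mode("overwrite").save("/tmp/my_geodata_geoparquet")
```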
There are a variety of other file formats that are good for geometric data:
Sedona does not write Shapefiles for two main reasons:
Shapefiles are a legacy file format still used in many production applications. However, they have many limitations and aren’t the best option in a modern data pipeline unless you need compatibility with legacy systems.