---
layout: global
title: Generic File Source Options
displayTitle: Generic File Source Options
license: |
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---
These generic options/configurations are effective only when using file-based sources: parquet, orc, avro, json, csv, text.
Please note that the hierarchy of directories used in the examples below is:
{% highlight text %}
dir1/
 ├── dir2/
 │    └── file2.parquet (schema: <file: string>, content: "file2.parquet")
 ├── file1.parquet (schema: <file: string>, content: "file1.parquet")
 └── file3.json (schema: <file: string>, content: "{'file':'corrupt.json'}")
{% endhighlight %}
### Ignore Corrupt Files

Spark allows you to use `spark.sql.files.ignoreCorruptFiles` to ignore corrupt files while reading data from files. When set to true, the Spark jobs will continue to run when encountering corrupted files and the contents that have been read will still be returned.

To ignore corrupt files while reading data files, you can use:
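A minimal Scala sketch, assuming an active `spark` session and the `dir1` layout shown above under the current working directory (`dir1/file3.json` is corrupt from Parquet's point of view):

{% highlight scala %}
// Enable the configuration so corrupt files are skipped instead of failing the job.
spark.sql("set spark.sql.files.ignoreCorruptFiles=true")

// dir1/file3.json is not a valid Parquet file, so it is silently ignored.
val testCorruptDF = spark.read.parquet("dir1/", "dir1/dir2/")
testCorruptDF.show()
// +-------------+
// |         file|
// +-------------+
// |file1.parquet|
// |file2.parquet|
// +-------------+
{% endhighlight %}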
### Ignore Missing Files

Spark allows you to use `spark.sql.files.ignoreMissingFiles` to ignore missing files while reading data from files. Here, a missing file is a file that was deleted under the directory after you constructed the DataFrame. When set to true, the Spark jobs will continue to run when encountering missing files and the contents that have been read will still be returned.
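A short sketch of how this might look in Scala (the path and the deletion scenario are illustrative assumptions, not part of the original page):

{% highlight scala %}
// Enable the configuration before constructing the DataFrame.
spark.conf.set("spark.sql.files.ignoreMissingFiles", "true")

val df = spark.read.parquet("dir1/")

// If files under dir1/ are deleted (e.g. by a concurrent cleanup job) after
// df is constructed but before it is acted on, the scan skips the missing
// files instead of failing, and the remaining contents are still returned.
df.show()
{% endhighlight %}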
### Path Glob Filter

`pathGlobFilter` is used to only include files with file names matching the pattern. The syntax follows `org.apache.hadoop.fs.GlobFilter`. It does not change the behavior of partition discovery.

To load files with paths matching a given glob pattern while keeping the behavior of partition discovery, you can use:
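A minimal Scala sketch, again assuming the `dir1` layout above; the glob keeps the JSON file out of the scan:

{% highlight scala %}
val testGlobFilterDF = spark.read.format("parquet")
  .option("pathGlobFilter", "*.parquet") // dir1/file3.json is filtered out
  .load("dir1")
testGlobFilterDF.show()
// +-------------+
// |         file|
// +-------------+
// |file1.parquet|
// +-------------+
{% endhighlight %}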
### Recursive File Lookup

`recursiveFileLookup` is used to recursively load files and it disables partition inferring. Its default value is `false`. If the data source explicitly specifies the `partitionSpec` when `recursiveFileLookup` is true, an exception will be thrown.

To load all files recursively, you can use:
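A minimal Scala sketch with the same assumed `dir1` layout; here it is combined with `pathGlobFilter` so the corrupt JSON file stays out of the Parquet scan:

{% highlight scala %}
val recursiveLoadedDF = spark.read.format("parquet")
  .option("recursiveFileLookup", "true")
  .option("pathGlobFilter", "*.parquet") // skip dir1/file3.json
  .load("dir1")
recursiveLoadedDF.show()
// +-------------+
// |         file|
// +-------------+
// |file1.parquet|
// |file2.parquet|
// +-------------+
{% endhighlight %}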
### Modification Time Path Filters

`modifiedBefore` and `modifiedAfter` are options that can be applied together or separately in order to achieve greater granularity over which files may load during a Spark batch query. (Note that Structured Streaming file sources don't support these options.)

* `modifiedBefore`: an optional timestamp to only include files with modification times occurring before the specified time. The provided timestamp must be in the following format: `YYYY-MM-DDTHH:mm:ss` (e.g. 2020-06-01T13:00:00)
* `modifiedAfter`: an optional timestamp to only include files with modification times occurring after the specified time. The provided timestamp must be in the following format: `YYYY-MM-DDTHH:mm:ss` (e.g. 2020-06-01T13:00:00)

When a timezone option is not provided, the timestamps will be interpreted according to the Spark session timezone (`spark.sql.session.timeZone`).
To load files with paths matching a given modified time range, you can use:
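A minimal Scala sketch; the cutoff timestamps and the `dir1` path are illustrative:

{% highlight scala %}
// Only include files modified before 07/01/2020 at 05:30 (session timezone).
val beforeFilterDF = spark.read.format("parquet")
  .option("modifiedBefore", "2020-07-01T05:30:00")
  .load("dir1")
beforeFilterDF.show()

// Only include files modified after 06/01/2020 at 05:30 (session timezone).
val afterFilterDF = spark.read.format("parquet")
  .option("modifiedAfter", "2020-06-01T05:30:00")
  .load("dir1")
afterFilterDF.show()
{% endhighlight %}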