{% include JB/setup %}
The Hadoop Distributed File System (HDFS) is a distributed, fault-tolerant file system that is part of the Apache Hadoop project. It is often used as storage for distributed processing engines such as Hadoop MapReduce and Apache Spark, or as the underlying storage for systems like Alluxio.
Tip: Use `Ctrl+.` for autocompletion.
To enable the HDFS interpreter in a notebook, click the Gear icon and select HDFS.
You can confirm that you are able to access the WebHDFS API by running a curl command against the WebHDFS endpoint provided to the interpreter. For example:
$> curl "http://localhost:50070/webhdfs/v1/?op=LISTSTATUS"
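If you prefer to probe the endpoint from code rather than the shell, the request URL can be composed programmatically. The sketch below is illustrative only and is not part of the interpreter; the `webhdfs_url` helper, and the assumption that the NameNode listens on `localhost:50070`, are mine.

```python
from urllib.parse import urlencode

def webhdfs_url(host, path, op, port=50070, **params):
    """Build a WebHDFS v1 REST URL for the given operation.

    `op` is a WebHDFS operation name such as LISTSTATUS or OPEN;
    extra keyword arguments become additional query parameters.
    """
    query = urlencode({"op": op, **params})
    return f"http://{host}:{port}/webhdfs/v1{path}?{query}"

# The same request the curl command above issues:
print(webhdfs_url("localhost", "/", "LISTSTATUS"))
# http://localhost:50070/webhdfs/v1/?op=LISTSTATUS
```

The returned URL can then be fetched with any HTTP client; a successful `LISTSTATUS` call returns a JSON document describing the directory contents.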