[LIVY-475] Support of Hadoop CredentialProvider API

In this PR I've added the following option to livy.conf:
```
```
to allow specifying the path to a Hadoop Credential Provider, which is then used in `WebServer.scala` to set the keystore password and key password for the JKS keystore used to enable SSL encryption.
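For illustration, here is a minimal Scala sketch (not the actual `WebServer.scala` change) of how such a lookup can work: `Configuration.getPassword` consults any providers registered under Hadoop's standard `hadoop.security.credential.provider.path` key before falling back to the configuration itself. The object and helper names, and the fallback handling, are hypothetical.
```scala
import org.apache.hadoop.conf.Configuration

object CredentialResolution {
  // Resolve a secret by alias: ask the configured credential providers first,
  // then fall back to a value read from livy.conf (names here are hypothetical).
  def resolvePassword(alias: String,
                      providerPath: Option[String],
                      fallback: Option[String]): Option[String] = {
    val fromProvider = providerPath.flatMap { path =>
      val conf = new Configuration()
      // Hadoop looks up credential providers through this standard key.
      conf.set("hadoop.security.credential.provider.path", path)
      Option(conf.getPassword(alias)).map(chars => new String(chars))
    }
    fromProvider.orElse(fallback)
  }
}

// e.g. resolvePassword("livy.keystore.password", providerPath, keystorePasswordFromConf)
```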

Also, before trying to use the Hadoop CredentialProvider API, I check whether it is available (it was not available in Hadoop < 2.6) using the same [method used in Oozie](https://github.com/apache/oozie/commit/6a731f9926158da38d1e3b518671ada95a544fe8#diff-800f95e605f21c5aaf5edef13039c9b9R124).
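A sketch of that kind of guard, assuming the same reflection-based approach as Oozie: treat the API as available only if `Configuration` exposes `getPassword(String)`, which first appeared in Hadoop 2.6.
```scala
import org.apache.hadoop.conf.Configuration

object CredentialProviderCheck {
  // True only when the running Hadoop version provides
  // Configuration.getPassword(String), i.e. the CredentialProvider API exists.
  def isAvailable: Boolean =
    try {
      classOf[Configuration].getMethod("getPassword", classOf[String])
      true
    } catch {
      case _: NoSuchMethodException => false
    }
}
```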

To use this, you will need to generate a credential provider containing "livy.keystore.password" and/or "livy.key-password" in the usual way:
```bash
hadoop credential create "livy.keystore.password" -value "keystore_secret" -provider jceks://hdfsnn1.example.com/my/path/livy_creds.jceks
hadoop credential create "livy.key-password" -value "key_secret" -provider jceks://hdfsnn1.example.com/my/path/livy_creds.jceks
```
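As a quick sanity check (hypothetical, not part of this PR), the stored aliases can also be listed programmatically through Hadoop's `CredentialProviderFactory`, reusing the example jceks URI from above:
```scala
import scala.collection.JavaConverters._
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.alias.CredentialProviderFactory

object ListLivyCredentials extends App {
  val conf = new Configuration()
  conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
    "jceks://hdfsnn1.example.com/my/path/livy_creds.jceks")

  // Each provider should report the aliases created with `hadoop credential create`.
  for (provider <- CredentialProviderFactory.getProviders(conf).asScala)
    println(s"$provider -> ${provider.getAliases.asScala.mkString(", ")}")
}
```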

Author: Ivan Dzikovsky <idzikovsky@protonmail.com>

Closes #99 from idzikovsky/LIVY-475.

Change-Id: Iae60550900067e61a37a6f74acb5e299005f3397
3 files changed
README.md

Apache Livy

Apache Livy is an open source REST interface for interacting with Apache Spark from anywhere. It supports executing snippets of code or programs in a Spark context that runs locally or in Apache Hadoop YARN.

  • Interactive Scala, Python and R shells
  • Batch submissions in Scala, Java, Python
  • Multiple users can share the same server (impersonation support)
  • Can be used for submitting jobs from anywhere with REST
  • Does not require any code change to your programs

Pull requests are welcome! But before you begin, please check out the Contributing section on the Community page of our website.

Online Documentation

Guides and documentation on getting started using Livy, example code snippets, and Livy API documentation can be found at livy.incubator.apache.org.

Before Building Livy

To build Livy, you will need:

Debian/Ubuntu:

  • mvn (from maven package or maven3 tarball)
  • openjdk-7-jdk (or Oracle Java7 jdk)
  • Python 2.6+
  • R 3.x

Redhat/CentOS:

  • mvn (from maven package or maven3 tarball)
  • java-1.7.0-openjdk (or Oracle Java7 jdk)
  • Python 2.6+
  • R 3.x

MacOS:

  • Xcode command line tools
  • Oracle's JDK 1.7+
  • Maven (Homebrew)
  • Python 2.6+
  • R 3.x

Required python packages for building Livy:

  • cloudpickle
  • requests
  • requests-kerberos
  • flake8
  • flaky
  • pytest

To run Livy, you will also need a Spark installation. You can get Spark releases at https://spark.apache.org/downloads.html.

Livy requires at least Spark 1.6 and supports both Scala 2.10 and 2.11 builds of Spark. Livy automatically picks the correct repl dependencies by detecting the Scala version of Spark.

Livy also supports Spark 2.0+ for both interactive and batch submission; you can seamlessly switch between different versions of Spark through the SPARK_HOME configuration, without needing to rebuild Livy.

Building Livy

Livy is built using Apache Maven. To check out and build Livy, run:

git clone https://github.com/apache/incubator-livy.git
cd incubator-livy
mvn package

By default Livy is built against Apache Spark 1.6.2, but the version of Spark used when running Livy does not need to match the version used to build Livy. Livy internally uses reflection to bridge the gaps between different Spark versions, and the Livy package itself does not bundle a Spark distribution, so it will work with any supported version of Spark (Spark 1.6+) without needing to be rebuilt against a specific version of Spark.