[LIVY-141][LIVY-175][DOCS] Update javadocs and scaladocs and include in Docs build

[LIVY-141](https://issues.apache.org/jira/browse/LIVY-141) [LIVY-175](https://issues.apache.org/jira/browse/LIVY-175)

Adding javadocs to Livy:
- Add ability to build Livy javadocs
- Update current javadoc comments to address build errors and warnings
- Add more javadoc comments to fully describe API
- Include public API javadocs and scaladocs in Livy Documentation
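
As a rough sketch of the new docs build, generating the aggregated javadocs from the repository root would look something like this (illustrative only; the exact goal and profile depend on how maven-javadoc-plugin is wired into the Livy build):

# Generate aggregated javadocs for the project (illustrative invocation)
mvn javadoc:aggregate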

Noted Remaining Issues:
- Since only the public javadocs are built as part of the Livy docs, not all javadoc warnings in other modules have been addressed.
- There are still some warnings in the Scala API scaladoc build because the docs do not link to external documentation for referenced Scala library classes; this does not break the build.
- Scaladoc is not supported for Livy as a whole. With the update to Scala 2.11, many scaladoc warnings were upgraded to errors that Livy fundamentally cannot fix.

Author: Alex Bozarth <ajbozart@us.ibm.com>

Closes #38 from ajbozarth/javadoc.

(cherry picked from commit fc22da91948bbf3d0629b4f74722e21a8687288d)
Signed-off-by: Alex Bozarth <ajbozart@us.ibm.com>

README.md

Apache Livy


Apache Livy is an open source REST interface for interacting with Apache Spark from anywhere. It supports executing snippets of code or programs in a Spark context that runs locally or in Apache Hadoop YARN.

  • Interactive Scala, Python and R shells
  • Batch submissions in Scala, Java, Python
  • Multiple users can share the same server (impersonation support)
  • Can be used for submitting jobs from anywhere with REST (see the example requests after this list)
  • Does not require any code change to your programs
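
For instance, creating an interactive session and running a snippet in it comes down to two REST calls. The sketch below follows the Livy REST API; the host and port are assumptions (8998 is Livy's default port):

# Create an interactive Scala session
curl -X POST -H "Content-Type: application/json" \
     -d '{"kind": "spark"}' http://localhost:8998/sessions

# Run a code snippet in session 0
curl -X POST -H "Content-Type: application/json" \
     -d '{"code": "sc.parallelize(1 to 10).sum()"}' \
     http://localhost:8998/sessions/0/statements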

Pull requests are welcome! But before you begin, please check out the Contributing section on the Community page of our website.

Online Documentation

Guides and documentation on getting started using Livy, example code snippets, and Livy API documentation can be found at livy.incubator.apache.org.

Before Building Livy

To build Livy, you will need:

Debian/Ubuntu:

  • mvn (from maven package or maven3 tarball)
  • openjdk-7-jdk (or Oracle Java7 jdk)
  • Python 2.6+
  • R 3.x

Redhat/CentOS:

  • mvn (from maven package or maven3 tarball)
  • java-1.7.0-openjdk (or Oracle Java7 jdk)
  • Python 2.6+
  • R 3.x

MacOS:

  • Xcode command line tools
  • Oracle's JDK 1.7+
  • Maven (Homebrew)
  • Python 2.6+
  • R 3.x
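
As an illustration, the prerequisites above map roughly to the following install commands (package names are assumptions for the given platform; adjust them to your distribution and release):

# Debian/Ubuntu (illustrative package names)
sudo apt-get install maven openjdk-7-jdk python r-base

# MacOS with Homebrew (illustrative)
xcode-select --install
brew install maven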

Required python packages for building Livy:

  • cloudpickle
  • requests
  • requests-kerberos
  • flake8
  • flaky
  • pytest
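
These are all regular PyPI packages, so one straightforward way to install them is with pip (an illustrative command, assuming pip targets the Python interpreter used for the build):

pip install cloudpickle requests requests-kerberos flake8 flaky pytest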

To run Livy, you will also need a Spark installation. You can get Spark releases at https://spark.apache.org/downloads.html.

Livy requires at least Spark 1.6 and supports both Scala 2.10 and 2.11 builds of Spark. Livy automatically picks the correct repl dependencies by detecting the Scala version of the Spark installation.

Livy also supports Spark 2.0+ for both interactive and batch submission. You can seamlessly switch between different versions of Spark through the SPARK_HOME configuration, without needing to rebuild Livy.
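
For example, switching Spark versions is just a matter of re-pointing SPARK_HOME before starting the server (the paths are placeholders, and bin/livy-server is the launcher script shipped with Livy):

# Run Livy against a Spark 1.6 installation...
export SPARK_HOME=/opt/spark-1.6.2
./bin/livy-server

# ...or against a Spark 2.x installation, without rebuilding Livy
export SPARK_HOME=/opt/spark-2.1.0
./bin/livy-server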

Building Livy

Livy is built using Apache Maven. To check out and build Livy, run:

git clone https://github.com/apache/incubator-livy.git
cd incubator-livy
mvn package

By default Livy is built against Apache Spark 1.6.2, but the version of Spark used when running Livy does not need to match the version used to build it. Livy internally uses reflection to bridge the gaps between different Spark versions, and the Livy package itself does not bundle a Spark distribution, so it will work with any supported version of Spark (1.6+) without needing to be rebuilt against a specific Spark version.
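
When iterating on the build itself, the standard Maven flag for skipping tests applies (a common Maven convenience, not specific to Livy):

# Build Livy without running the test suites
mvn clean package -DskipTests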