Pirk is written in Java 8, and build and dependency management are handled via Apache Maven. Pirk uses Git for change management.
The Pirk code is available via the Pirk Git repository and is mirrored to GitHub.
To check out the code:

```
git clone http://git.apache.org/incubator-pirk.git/
```
Then check out the `master` branch (which should be the default):

```
git checkout master
```
Always add the current ASF license header as described here. Please use the provided `eclipse-pirk-template.xml` code template file to automatically add the ASF header to new code.
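For reference, the standard ASF source header that belongs at the top of each Java file (the template file above adds it automatically) is:

```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
```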
Please do not use author tags; the code is developed and owned by the community.
Pirk follows the coding style practices found in the `eclipse-pirk-codestyle.xml` file; please ensure that all contributions are formatted accordingly.
Pirk Javadocs may be found [here]({{ site.baseurl }}/javadocs).
Pirk currently follows a simple Maven build with a single-level `pom.xml`. As such, Pirk may be built via `mvn package`.
For convenience, the following POM files are included:
Pirk may be built with a specific POM file via `mvn package -f <specificPom.xml>`.
Pirk uses Jenkins for continuous integration. The build history is available here.
Pirk also uses Travis CI for continuous integration on GitHub pull requests; you can find the build history here.
JUnit in-memory unit and functional testing is performed by building with `mvn package` or by running the tests with `mvn test`. A specific test may be run using the Maven command `mvn -Dtest=<TestName> test`.
Distributed functional testing may be performed on a cluster with the desired distributed computing technology installed. Currently, distributed implementations include batch processing in Hadoop MapReduce and Spark with inputs from HDFS or Elasticsearch.
To run all of the distributed functional tests on a cluster, the following `hadoop jar` command may be used:

```
hadoop jar <pirkJar> org.apache.pirk.test.distributed.DistributedTestDriver -j <full path to pirkJar>
```
Specific distributed test suites may be run by providing the corresponding command line options. The available options are given by the following command:

```
hadoop jar <pirkJar> org.apache.pirk.test.distributed.DistributedTestDriver --help
```
The Pirk functional tests using Spark run through the SparkLauncher via the `hadoop jar` command (not by running `spark-submit` directly). To run successfully, the `spark.home` property must be set correctly in the `pirk.properties` file; `spark.home` is the directory containing `bin/spark-submit`.
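A sketch of the relevant `pirk.properties` entry, assuming Spark is installed under `/usr/local/spark` (the path is a placeholder; use the installation directory on your cluster):

```
# Directory containing bin/spark-submit (placeholder path; adjust for your cluster)
spark.home=/usr/local/spark
```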
Pirk uses log4j for logging. The `log4j.properties` file may be edited to enable the `debug` log level.
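For example, a minimal `log4j.properties` sketch that raises the root logger to `DEBUG` on a console appender (the appender name and pattern here are illustrative, not Pirk's shipped configuration):

```
# Root logger at DEBUG, writing to the console appender named "stdout"
log4j.rootLogger=DEBUG, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
```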
Pirk includes a benchmarking package leveraging JMH. Currently, [Paillier benchmarking]({{ site.baseurl }}/javadocs/org/apache/pirk/benchmark/PaillierBenchmark) is enabled in Pirk.
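For context on what that benchmark exercises: Paillier is an additively homomorphic cryptosystem, so multiplying two ciphertexts yields an encryption of the sum of their plaintexts. A minimal textbook sketch (illustrative only; this is not Pirk's implementation, and the key sizes used below are far too small for real use):

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Minimal textbook Paillier sketch; NOT Pirk's hardened implementation.
public class PaillierSketch {
    public final BigInteger n, nSquared, lambda, mu;
    private static final SecureRandom rnd = new SecureRandom();

    public PaillierSketch(int bits) {
        BigInteger p = BigInteger.probablePrime(bits / 2, rnd);
        BigInteger q = BigInteger.probablePrime(bits / 2, rnd);
        n = p.multiply(q);
        nSquared = n.multiply(n);
        // With generator g = n + 1, mu is simply lambda^{-1} mod n
        lambda = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE));
        mu = lambda.modInverse(n);
    }

    public BigInteger encrypt(BigInteger m) {
        BigInteger r;
        do { // random r in [1, n) with gcd(r, n) = 1
            r = new BigInteger(n.bitLength(), rnd);
        } while (r.signum() == 0 || r.compareTo(n) >= 0 || !r.gcd(n).equals(BigInteger.ONE));
        // E(m) = (n + 1)^m * r^n mod n^2
        return n.add(BigInteger.ONE).modPow(m, nSquared)
                .multiply(r.modPow(n, nSquared)).mod(nSquared);
    }

    public BigInteger decrypt(BigInteger c) {
        // D(c) = L(c^lambda mod n^2) * mu mod n, where L(u) = (u - 1) / n
        BigInteger u = c.modPow(lambda, nSquared);
        return u.subtract(BigInteger.ONE).divide(n).multiply(mu).mod(n);
    }
}
```

The modular exponentiations in `encrypt` and `decrypt` dominate the cost, which is what the Paillier benchmark measures.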
To build with benchmarks enabled, use:

```
mvn package -f pom-with-benchmarks.xml
```
To run the benchmarks, use:

```
java -jar target/benchmarks.jar
```
Optionally, you can reduce the number of times each benchmark is run (the default is 10) using the `-f` flag. For example, to run each benchmark only twice, use `java -jar target/benchmarks.jar -f 2`.
FYI: right now this emits a lot of logging errors, as the logger fails to work while benchmarks are running. Ignore the many stack traces and wait for execution to complete to see statistics for the different benchmarks.
Please see [Making Releases]({{ site.baseurl }}/making_releases) and [Verifying Releases]({{ site.baseurl }}/verifying_releases).
Please see the [How to Contribute]({{ site.baseurl }}/how_to_contribute) page.