author:    Olaf Flebbe <email@example.com>  Mon Jan 30 21:49:01 2017 +0100
committer: Olaf Flebbe <firstname.lastname@example.org>  Tue Jan 31 19:56:12 2017 +0100
BIGTOP-2679: Streamline CI Jobs
Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Apache Bigtop is a project for the development of packaging and tests of the Apache Hadoop ecosystem.
The primary goal of Apache Bigtop is to build a community around the packaging and interoperability testing of Apache Hadoop-related projects. This includes testing at various levels (packaging, platform, runtime, upgrade, etc.) developed by a community with a focus on the system as a whole, rather than individual projects.
The simplest way to get a feel for how Bigtop works is to cd into
bigtop-deploy/vm and try out the recipes under vagrant-puppet-vm, vagrant-puppet-docker, and so on. Each one rapidly spins up a local Bigtop-based big data distribution and runs the Bigtop smoke tests on it. Once you get the gist, you can hack around with the recipes to learn how the puppet/rpm/smoke-tests all work together, going deeper into the components you are interested in as described below.
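For example, the VM recipe can be tried with the standard Vagrant workflow. This is a sketch, not an exact transcript: the directory path comes from the recipe names above, and the provisioning details may vary between Bigtop releases.

```shell
cd bigtop-deploy/vm/vagrant-puppet-vm

vagrant up       # spin up the VM; Puppet deploys the Bigtop stack during provisioning
vagrant ssh      # log in to the running VM to poke around the deployed cluster
vagrant destroy  # tear the VM down when you are done
```

Once the VM is up, editing the Puppet manifests under bigtop-deploy/ and re-provisioning is a quick way to experiment with the stack composition.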
Also, there is a new project underway, Apache Bigtop blueprints, which aims to create templates/examples that demonstrate/compare various Apache Hadoop ecosystem components with one another.
There are lots of ways to contribute. People with different expertise can help with various subprojects:
Also, opening JIRAs and getting started by posting on the mailing list is helpful.
Bigtop supports the Commit-Then-Review (CTR) model of development. The following rules are used for the CTR process:
You can go to the Apache Bigtop website for notes on how to do “common” tasks like:
Below are some recipes for getting started with using Apache Bigtop. As Apache Bigtop has different subprojects, these recipes will continue to evolve.
For specific questions it's always a good idea to ping the mailing list at email@example.com to get some immediate feedback, or open a JIRA.
The simplest way to test Bigtop is described in the bigtop-tests/smoke-tests/README file.
For integration (API level) testing with maven, read on.
WARNING: since testing packages requires installing them on a live system, it is highly recommended to use VMs for that. Testing Apache Bigtop is done using the iTest framework. The tests are organized in Maven submodules, with one submodule per Apache Bigtop component. The bigtop-tests/test-execution/smokes/pom.xml defines all submodules to be tested, and each submodule is in its own directory under smokes/, for example:
smokes/hadoop/pom.xml
smokes/hive/pom.xml
... and so on.
New way (with Gradle build in place)
Step 1: install smoke tests for one or more components
Example 2: Installing just Hadoop-specific smoke tests
Step 2: Run the smoke tests on your cluster (see Step 3 and/or Step 4 below)
We are in the process of migrating subprojects under the top-level Gradle build. The currently converted projects can be listed by running
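Gradle has a built-in task for listing subprojects, so from the repo root (assuming the standard Gradle wrapper is present) this would look like:

```shell
# Lists all subprojects wired into the top-level Gradle build
./gradlew projects
```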
To see the list of tasks in a subproject, e.g. itest-common, you can run
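Using Gradle's standard per-project task listing (the itest-common subproject name is the example from above):

```shell
# Show the tasks defined in the itest-common subproject
./gradlew :itest-common:tasks
```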
Step 1: Build the smokes with snapshots. This ensures that all transitive dependencies etc. are in your repo
mvn clean install -DskipTests -DskipITs -DperformRelease -f ./bigtop-test-framework/pom.xml
mvn clean install -DskipTests -DskipITs -DperformRelease -f ./bigtop-tests/test-artifacts/pom.xml
Step 2: Now, rebuild in “offline” mode. This will make sure that your local changes to Bigtop are embedded in the built artifacts.
mvn clean install -DskipTests -DskipITs -DperformRelease -o -nsu -f ./bigtop-test-framework/pom.xml
mvn clean install -DskipTests -DskipITs -DperformRelease -o -nsu -f ./bigtop-tests/test-artifacts/pom.xml
Step 3: Now, you can run the smoke tests on your cluster.
Example 1: Running all the smoke tests with TRACE level logging (shows std out from each mr job).
mvn clean verify -Dorg.apache.bigtop.itest.log4j.level=TRACE -f ./bigtop/bigtop-tests/test-execution/smokes/pom.xml
Example 2: Running just the Hadoop examples, nothing else.
mvn clean verify -D'org.apache.maven-failsafe-plugin.testInclude=**/*TestHadoopExamples*' -f bigtop-tests/test-execution/smokes/hadoop/pom.xml
Note: A minor bug/issue: you need the “testInclude” regular expression above, even if you don't want to customize the tests, since existing test names don't follow the Maven integration-test naming convention of IT*, but instead follow the Surefire (unit test) convention of Test*.
Another common use case for Apache Bigtop is creating / setting up your own Apache Hadoop distribution.
For details on this, check out the bigtop-deploy/README.md file, which describes how to use the puppet repos to create and setup your VMs.
There is a current effort underway to create vagrant/docker recipes as well, which will be contained in the bigtop-deploy/ package.
Packages have been built for CentOS/RHEL 5 and 6, Fedora 18, SuSE Linux Enterprise 11, OpenSUSE 12.2, Ubuntu LTS Lucid and Precise, and Ubuntu Quantal. They can probably be built for other platforms as well. Some of the binary artifacts might be compatible with other closely related distributions.
On all systems, building Apache Bigtop requires a certain set of tools.
To bootstrap the development environment from scratch, execute
This build task expects Puppet 3.x to be installed; the user has to have sudo permissions. The task will pull down and install all development dependencies, frameworks, and SDKs required to build the stack on your platform.
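In Bigtop's Gradle build this bootstrap is the toolchain task; a sketch of the invocation from the repo root (sudo is required, as noted above):

```shell
# Installs all Puppet-managed build dependencies (JDK, Maven, packaging tools, etc.)
./gradlew toolchain
```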
Building packages:
If -Dbuildwithdeps=true is set, Gradle will follow the build order specified in the “dependencies” section of the bigtop.bom file. Otherwise just a single component will get built (the original behavior).
To use an alternative definition of the stack composition (aka BOM), specify its name with the -Dbomfile= system property at build time.
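Putting these flags together, a package build follows the <component>-pkg task pattern used elsewhere in this README (the component name here is illustrative; -Dbuildwithdeps and -Dbomfile are the properties described above):

```shell
# Build only the zookeeper packages for the current platform
./gradlew zookeeper-pkg

# Build the component together with its BOM-declared dependencies,
# using an explicitly named BOM file
./gradlew zookeeper-pkg -Dbuildwithdeps=true -Dbomfile=bigtop.bom
```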
You can visualize all task dependencies by running
gradle tasks --all
Building local YUM/APT repositories:
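A sketch of the repository tasks, assuming Bigtop's Gradle convention of aggregate yum/apt tasks over the packages built above; treat the exact task names as assumptions and confirm them with gradle tasks --all on your checkout:

```shell
# Assemble a local YUM repository from previously built RPMs
./gradlew yum

# Assemble a local APT repository from previously built DEBs
./gradlew apt
```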
Recommended build environments
Bigtop provides “development in a can” environments, using Docker containers. These have the build tools set up by the toolchain, as well as the user and build environment configured and cached. All currently supported OSes can be pulled from the official Bigtop repository at https://hub.docker.com/r/bigtop/slaves/tags/
To build a component (e.g. bigtop-groovy) for a particular OS (e.g. ubuntu-14.04), you can run the following from a clone of the Bigtop workspace (assuming your system has the Docker engine set up and working):
docker run --rm -u jenkins:jenkins -v `pwd`:/ws --workdir /ws bigtop/slaves:trunk-ubuntu-14.04 bash -l -c './gradlew allclean ; ./gradlew bigtop-groovy-pkg'
The website can be built by running mvn site:site from the root directory of the project. The main page can then be accessed at “project_root/target/site/index.html”.
The source for the website is located in “project_root/src/site/”.
To fetch source from a Git repository you need to modify
bigtop.mk and add the following fields to your package:
_GIT_REPO - SSH, HTTP or local path to the Git repo.
_GIT_REF - branch, tag or commit hash to check out.
Some packages have different names for the source directory and the source tarball (e.g. the HBase source tarball unpacks into an
hbase-0.98.5 directory). By default, the source will be fetched into a directory named after the tarball, without the
.t* extension. To explicitly set the directory name, use the _GIT_DIR field.
Example for HBase:
HBASE_GIT_REPO=https://github.com/apache/hbase.git
HBASE_GIT_REF=$(HBASE_PKG_VERSION)
HBASE_GIT_DIR=hbase-$(HBASE_PKG_VERSION)
You can get in touch with us on the Apache Bigtop mailing lists.