This file defines the release pipeline for a particular release channel. It defines a function `get_pipeline(mxnet_variant)`, which returns a closure with the pipeline to be executed. For instance:
```groovy
def get_pipeline(mxnet_variant) {
  return {
    stage("${mxnet_variant}") {
      stage("Build") {
        timeout(time: max_time, unit: 'MINUTES') {
          build(mxnet_variant)
        }
      }
      stage("Test") {
        timeout(time: max_time, unit: 'MINUTES') {
          test(mxnet_variant)
        }
      }
      stage("Publish") {
        timeout(time: max_time, unit: 'MINUTES') {
          publish(mxnet_variant)
        }
      }
    }
  }
}

def build(mxnet_variant) {
  node(UBUNTU_CPU) {
    ...
  }
}

...
```
The “first mile” of the CD process is posting the mxnet binaries to the artifact repository. Once this step is complete, the pipelines for the different release channels (PyPI, Maven, etc.) can begin from the compiled binary, and focus solely on packaging it, testing the package, and posting it to the particular distribution channel.
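As a sketch of what such a channel pipeline might look like (the `restore_artifact`, `package_binary`, `test_package`, and `push_package` helpers are hypothetical placeholders for the channel-specific logic):

```groovy
// A minimal sketch of a downstream release channel pipeline. It assumes the
// libmxnet binary was already posted to the artifact repository by the
// "first mile" job; all helper functions here are hypothetical placeholders.
def get_pipeline(mxnet_variant) {
  return {
    stage("${mxnet_variant}") {
      stage("Package") {
        timeout(time: max_time, unit: 'MINUTES') {
          node(NODE_LINUX_CPU) {
            restore_artifact(mxnet_variant)  // pull the compiled binary from the artifact repository
            package_binary(mxnet_variant)    // wrap it for the distribution channel (wheel, jar, ...)
          }
        }
      }
      stage("Test") {
        timeout(time: max_time, unit: 'MINUTES') {
          node(NODE_LINUX_CPU) {
            test_package(mxnet_variant)
          }
        }
      }
      stage("Publish") {
        timeout(time: max_time, unit: 'MINUTES') {
          node(NODE_LINUX_CPU) {
            push_package(mxnet_variant)      // post to PyPI, Maven, etc.
          }
        }
      }
    }
  }
}
```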
To add a new release channel, create a directory under `cd` which represents your release channel, e.g. `python/pypi`, and add a `Jenkins_pipeline.groovy` file there with a `get_pipeline(mxnet_variant)` function that describes your pipeline.

We shouldn't set global timeouts for the pipelines. Rather, the step being executed should be wrapped with a `timeout` function (as in the pipeline example above). `max_time` is a global variable set at the release job level.
Ensure that either your steps or the whole pipeline are wrapped in a `node` call. The jobs execute on a utility node; if you don't wrap your pipeline, or its individual steps, in a `node` call, the work will be executed on that utility node, which will lead to problems.
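For instance, a single step can satisfy both the timeout and the `node` requirements (a sketch; the `compile` helper is illustrative):

```groovy
// Sketch: an individual step bounded by a timeout and pinned to a node.
// max_time is the release-job-level global mentioned above; compile is an
// illustrative helper, not part of the actual pipeline code.
def build(mxnet_variant) {
  timeout(time: max_time, unit: 'MINUTES') {
    node(NODE_LINUX_CPU) {
      compile(mxnet_variant)
    }
  }
}
```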
Examples of the two approaches:
Whole pipeline
The release pipeline is executed on a single node, chosen depending on the variant being released. This approach is fine as long as the stages that don't need specialized hardware (e.g. compilation, packaging, publishing) are short-lived.
```groovy
def get_pipeline(mxnet_variant) {
  def node_type = mxnet_variant.startsWith('cu') ? NODE_LINUX_GPU : NODE_LINUX_CPU
  return {
    node(node_type) {
      stage("${mxnet_variant}") {
        stage("Build") {
          ...
        }
        stage("Test") {
          ...
        }
        ...
      }
    }
  }
}
```
Per step
Use this approach in cases where you have long-running stages that don't depend on specialized/expensive hardware.
```groovy
def get_pipeline(mxnet_variant) {
  return {
    stage("${mxnet_variant}") {
      stage("Build") {
        ...
      }
      ...
    }
  }
}

def build(mxnet_variant) {
  node(UBUNTU_CPU) {
    ...
  }
}

def test(mxnet_variant) {
  def node_type = mxnet_variant.startsWith('cu') ? NODE_LINUX_GPU : NODE_LINUX_CPU
  node(node_type) {
    ...
  }
}
```
Both the statically linked and dynamically linked libmxnet pipelines have long-running compilation and testing stages that do not require specialized/expensive hardware (e.g. GPUs). Therefore, as much as possible, it is important to run each stage on its own node, and to design the pipeline to spend the least amount of time possible on expensive hardware. E.g. for GPU builds, only run the GPU tests on GPU instances; all other stages can be executed on CPU nodes.
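For example, a GPU variant pipeline could be structured so that only the test step occupies a GPU instance (a sketch; the `compile`, `run_tests`, and `publish_package` helpers are illustrative):

```groovy
// Sketch: occupy GPU instances only for the stages that need them. The
// compile, run_tests, and publish_package helpers are illustrative.
def build(mxnet_variant) {
  node(NODE_LINUX_CPU) {     // compilation does not need a GPU
    compile(mxnet_variant)
  }
}

def test(mxnet_variant) {
  def node_type = mxnet_variant.startsWith('cu') ? NODE_LINUX_GPU : NODE_LINUX_CPU
  node(node_type) {          // only 'cu*' variants run their tests on GPU nodes
    run_tests(mxnet_variant)
  }
}

def publish(mxnet_variant) {
  node(NODE_LINUX_CPU) {     // publishing never needs to hold a GPU
    publish_package(mxnet_variant)
  }
}
```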