This repository contains Ansible scripts for managing our VM testing infrastructure.
```
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
```
On a Big Sur Mac you may have to do:
```
$ env LDFLAGS="-L$(brew --prefix openssl@1.1)/lib" CFLAGS="-I$(brew --prefix openssl@1.1)/include" pip install -r requirements.txt
```
The basic steps to provision a new Jenkins agent node are (sketched below):

1. Run `./tools/gen-config`.
2. Use `ansible-vault` to add the node's encrypted secrets to its `host_vars/hostname.yml` file.
3. Run `ansible-playbook ci_agents.yml`.
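A minimal sketch of that flow, assuming the vaulted variable is named `ansible_password` (the variable name and the exact vaulting step are assumptions; the node name is just the example from below):

```
# Regenerate the inventory and ssh.cfg once the VM exists in the cloud account
./tools/gen-config

# Encrypt a secret and paste the output into the node's host_vars file,
# e.g. host_vars/couchdb-worker-x86-64-debian-dal-1-01.yml
ansible-vault encrypt_string 's3cret' --name 'ansible_password'

# Apply the agent playbook, limited to the new node (--limit is a standard
# Ansible flag, not specific to this repository)
ansible-playbook ci_agents.yml --limit couchdb-worker-x86-64-debian-dal-1-01
```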
Node names should follow this pattern:
```
couchdb-worker-$arch-$osname-$zone-$node_id
```

For example:

```
couchdb-worker-x86-64-debian-dal-1-01
```
There should be a single bastion VM set up for each subnet. We just use the cheapest cx2-2x4 instance for these nodes so that we can jump to the other hosts.
Provisioning a bastion VM is much the same as for a CI agent, though it should happen much more rarely. Currently the assumption is that each subnet has exactly one bastion. The `./tools/gen-config` script will complain if this assumption is violated, so it should be obvious if we get this wrong. It will also complain if we have a subnet that is missing a bastion box.
The steps for provisioning a new bastion box are (sketched below):

1. Run `./tools/gen-config`.
2. Include the generated `ssh.cfg` in your `~/.ssh/config` file.
3. Run `ansible-playbook bastions.yml`.
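A minimal sketch of that flow (`--limit` is a standard Ansible flag; the bastion name is just the example from below):

```
# Regenerate configs so the new bastion shows up in the inventory
./tools/gen-config

# Include the generated ssh.cfg from ~/.ssh/config
# (see the sketch near the end of this README)

# Apply the bastion playbook, limited to the new bastion
ansible-playbook bastions.yml --limit couchdb-bastion-x86-64-debian-dal-1-01
```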
Bastion names should follow this pattern:
```
couchdb-bastion-$arch-$osname-$zone-$node_id
```

For example:

```
couchdb-bastion-x86-64-debian-dal-1-01
```
To configure `./tools/gen-config`, create a `~/.couchdb-infra-cm.cfg` file that contains the following options:
```
[ibmcloud.<environment>]
api_key = <REDACTED>
api_url = https://us-south.iaas.cloud.ibm.com/v1
crn = crn:v1:...
instance_id = 123-abc...

[extra.<instancename>]
user = linux1
ip_addr = x.y.z.w
arch = s390x
num_cpus = 4
ram = 8
```
`<environment>` is a tag used to differentiate multiple environments; it allows fetching instances from more than one IBM Cloud account. If `api_url` is provided, it will be used to fetch VPC instances; by default it uses `https://us-south.iaas.cloud.ibm.com/v1`. The `crn` field, if provided, will be added as a `CRN: <crn>` header. `instance_id` is used only by the `power` environment (see the Power Instances section for more details).
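For illustration only, a hypothetical curl equivalent of the VPC instance fetch the script performs (the IAM token exchange from `api_key` is not shown, and the `version` date is arbitrary):

```
# List VPC instances; the CRN header would only be sent when crn is configured
curl -s \
  -H "Authorization: Bearer $IAM_TOKEN" \
  -H "CRN: crn:v1:..." \
  "https://us-south.iaas.cloud.ibm.com/v1/instances?version=2023-02-24&generation=2"
```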
An `extra.<instancename>` section describes an extra unmanaged, manually added instance which is not discoverable via cloud.ibm.com with an API key.
The `tools/gen-config` script can then be used to generate our `production` inventory and `ssh.cfg` configuration:

```
$ ./tools/gen-config
```
This script requires access to the https://cloud.ibm.com account that hosts the VMs, so not everyone will be able to run it. However, this only matters when provisioning new nodes. Modifying Ansible scripts and applying changes to existing nodes can be done by any CouchDB PMC member who has been added to the CI nodes via this repository.
To apply the configuration, run the playbooks:

```
$ ansible-playbook bastions.yml
$ ansible-playbook ci_agents.yml
```
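Ansible's standard check mode can preview what a playbook would change before applying it for real:

```
$ ansible-playbook ci_agents.yml --check --diff
```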
Ad hoc commands can be run across the CI agents with `ansible` directly:

```
% ansible -i production ci_agents -a "sudo sv restart jenkins"
% ansible -v -i production ci_agents -a "sudo apt list --upgradable"
% ansible -v -i production ci_agents -a "sudo unattended-upgrade -v"
```
(Assuming the generated `ssh.cfg` was included in `~/.ssh/config`.)
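One way to do that, assuming the repository is checked out at `~/src/couchdb-infra-cm` (the path is hypothetical):

```
# Near the top of ~/.ssh/config; an Include placed after a Host block
# would only apply within that block
Include ~/src/couchdb-infra-cm/ssh.cfg
```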
If you want to ssh directly to a node, you can do:
```
$ ssh $hostname
```

For example:

```
$ ssh couchdb-worker-x86-64-debian-dal-1-01
```
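This works because the generated `ssh.cfg` routes worker connections through the subnet's bastion. A hand-written equivalent for a single host would presumably look something like this (an assumption about what gen-config emits, using the standard OpenSSH `ProxyJump` option):

```
# Hypothetical hand-written equivalent of one generated ssh.cfg entry
Host couchdb-worker-x86-64-debian-dal-1-01
    ProxyJump couchdb-bastion-x86-64-debian-dal-1-01
```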