Switch to having agents dial into Jenkins

This changes Jenkins agents to dial into the Jenkins master instead of
having the master SSH into each node, which allows us to expand our
private cloud worker pool much further.
diff --git a/README.md b/README.md
index 5094d0b..c1321b1 100644
--- a/README.md
+++ b/README.md
@@ -10,6 +10,27 @@
     $ source venv/bin/activate
     $ pip install -r requirements.txt
 
+Provisioning VMs
+---
+
+Our main workhorse is the cx2-4x8 instance type. There are also a few
+ppc64le nodes for full builds. Whoever provisions a VM should generate
+a new inventory and perform the first Ansible run against the new node
+so that the other CouchDB infra members have access.
+
+
+Bastion VMs
+---
+
+There should be a single bastion VM set up for each subnet. We use the
+cheapest cx2-2x4 instance for these nodes since they only need to serve
+as jump hosts to the other machines.
+
+If the bastion's public IP address changes we have to update `group_vars/ci_agents.yml`
+and set `ansible_ssh_common_args` to use the new public IP for contacting
+servers. We should also update `ssh.cfg` in this repository to make it
+easier to contact servers manually.
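+
+For reference, `group_vars/ci_agents.yml` should look roughly like this
+(the bastion IP here is illustrative):
+
+    ansible_ssh_user: root
+    ansible_ssh_common_args: >-
+      -o StrictHostKeyChecking=no
+      -o ProxyCommand="ssh -W %h:%p -q root@169.48.153.153"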
+
 
 Generating Inventory Listings
 ---
@@ -24,23 +45,28 @@
 
     $ ./tools/gen-inventory > production
 
-Setting up CI workers for Jenkins
----
-
-Once the a new VM has been added into the `production` inventory whoever
-provisioned the VM will need to execute the first Ansible run so that
-the CouchDB infra group has access (where infra group is defined as
-the list of GitHub users in `roles/common/tasks/main.yml`).
-
-    $ ansible-playbook -i production ci_agents.yml
-
-Once this playbook finishes the new VM should be configured to be usable
-as a Jenkins agent.
-
 
 Configuring Jenkins
 ---
 
-Once Ansible has run against a new VM configuring it as an agent in
-Jenkins is fairly straightforward. You can just copy an existing node's
-configuration and update the SSH host IP address.
\ No newline at end of file
+Once a CI worker has been provisioned we must also configure Jenkins so
+that the agent JAR URL and secret are available. The easiest approach is
+to copy the configuration from one of the existing nodes. From the new
+node's configuration page, copy the secret value into an encrypted vault
+file in the `host_vars` directory.
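+
+For example, the secret can be vaulted with something like the following
+(the hostname and shell variable are illustrative):
+
+    $ ansible-vault encrypt_string --name jenkins_secret "$secret" \
+        >> host_vars/couchdb-worker-x86-64-debian-dal-1-01.yml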
+
+
+Running Ansible
+---
+
+    $ ansible-playbook -i production ci_agents.yml
+
+
+Useful Commands
+---
+
+To SSH directly to a node using its private IP:
+
+```bash
+$ ssh -F ssh.cfg $private_ip
+```
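+
+To check that every agent is reachable through the bastion, an ad hoc
+ping should work (assuming the inventory is up to date):
+
+```bash
+$ ansible -i production ci_agents -m ping
+```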
diff --git a/ansible.cfg b/ansible.cfg
index 472e641..d5122a1 100644
--- a/ansible.cfg
+++ b/ansible.cfg
@@ -1,3 +1,7 @@
 [defaults]
 inventory = ./production
 vault_password_file = ~/.couchdb-ansible-vault
+
+[ssh_connection]
+ssh_args = -F ./ssh.cfg
+control_path = ~/.ssh/ansible-%%r@%%h:%%p
diff --git a/bastions.yml b/bastions.yml
new file mode 100644
index 0000000..dbaed03
--- /dev/null
+++ b/bastions.yml
@@ -0,0 +1,4 @@
+---
+- hosts: bastions
+  roles:
+    - common
diff --git a/group_vars/ci_agents.yaml b/group_vars/ci_agents.yaml
deleted file mode 100644
index 7067c33..0000000
--- a/group_vars/ci_agents.yaml
+++ /dev/null
@@ -1,2 +0,0 @@
-ansible_ssh_user: root
-ansible_ssh_common_args: -o StrictHostKeyChecking=no
\ No newline at end of file
diff --git a/host_vars/couchdb-worker-x86-64-debian-dal-1-01.yml b/host_vars/couchdb-worker-x86-64-debian-dal-1-01.yml
new file mode 100644
index 0000000..82ee097
--- /dev/null
+++ b/host_vars/couchdb-worker-x86-64-debian-dal-1-01.yml
@@ -0,0 +1,10 @@
+jenkins_secret: !vault |
+          $ANSIBLE_VAULT;1.1;AES256
+          37663466623637393365323764643632623833343333623565356563633865383833386562333335
+          3235333066323630353139396166303538313861386466340a636361633833353636653938646133
+          31366532333266326538396134313631356262646535363266646663343063653831386135653861
+          3535373237653431370a373732373836383738616461383931313834323434323637386465363764
+          63666565353964643835326337626538643366616461376566623433633836313932613264383762
+          36636535353535613765613031316563353539646339333234666337396536643963333930326634
+          39313162643063633137626534363865613531363832376437393036633761656662383436626632
+          34623663316438313438
diff --git a/host_vars/couchdb-worker-x86-64-debian-dal-1-02.yml b/host_vars/couchdb-worker-x86-64-debian-dal-1-02.yml
new file mode 100644
index 0000000..ce6888d
--- /dev/null
+++ b/host_vars/couchdb-worker-x86-64-debian-dal-1-02.yml
@@ -0,0 +1,10 @@
+jenkins_secret: !vault |
+          $ANSIBLE_VAULT;1.1;AES256
+          63386161353464336539663432383362303566633139383339383765346232383365663434633931
+          6163373532303733663665653734346636616539393030340a346133666539333835316266326437
+          37383332333232303738306262346433613835393466363638343462316464323065616330623235
+          6561623765343837350a666233623934323439313163383532663833396533333536633637383932
+          65343139323961646339633330376130353361396239623035636435633966383262313764613263
+          33323533333630383130343239343135393433663038626164616166616530363363383632383430
+          39663537326637373565306433333536666464633762616361353331343238616631343332363232
+          38653532333566316435
diff --git a/host_vars/couchdb-worker-x86-64-debian-dal-1-03.yml b/host_vars/couchdb-worker-x86-64-debian-dal-1-03.yml
new file mode 100644
index 0000000..1381399
--- /dev/null
+++ b/host_vars/couchdb-worker-x86-64-debian-dal-1-03.yml
@@ -0,0 +1,10 @@
+jenkins_secret: !vault |
+          $ANSIBLE_VAULT;1.1;AES256
+          33613736333739616439353932653062383463323761333062376266366166376132336238356633
+          6463373365316234343337386636306262636361366561610a626232303038663230336138343339
+          30633031356132356361386662386238363637643632356463393630383664366637323830326132
+          3635393861336466310a383135633862343062366534653234636435303537613437663861663533
+          32356332656335663936393733356637623434393838653964363764666239323432366238333632
+          38633830376366373439613665396362363466393333643465643062626466383966303238366264
+          35636138633764386265343239393433356131353831636137643638626364363366303964376230
+          66653836306163643433
diff --git a/host_vars/couchdb-worker-x86-64-debian-dal-1-04.yml b/host_vars/couchdb-worker-x86-64-debian-dal-1-04.yml
new file mode 100644
index 0000000..cb4092e
--- /dev/null
+++ b/host_vars/couchdb-worker-x86-64-debian-dal-1-04.yml
@@ -0,0 +1,10 @@
+jenkins_secret: !vault |
+          $ANSIBLE_VAULT;1.1;AES256
+          36356636373264306338636464643163323564366166623261626361623933646433343461333665
+          3135386161323639613233633831653834393234626261380a636536613939663538613032373961
+          34303266623332653962373737663535366233616661313431633733626161653036666464376534
+          3632383539323139380a666362323232313838626131356265393936383931666663643730343864
+          36343238653530373533643561323033396435366134343433623233343539346366353465373830
+          37623765336335646135616236343634376135363633623330636664373335343833376337633739
+          61343330396430383862623664383431363538373933626635646330663135666434333739646234
+          35623531646134656462
diff --git a/host_vars/couchdb-worker-x86-64-debian-dal-1-05.yml b/host_vars/couchdb-worker-x86-64-debian-dal-1-05.yml
new file mode 100644
index 0000000..ab76c3c
--- /dev/null
+++ b/host_vars/couchdb-worker-x86-64-debian-dal-1-05.yml
@@ -0,0 +1,10 @@
+jenkins_secret: !vault |
+          $ANSIBLE_VAULT;1.1;AES256
+          38393833646633616461656532363639376231356432373037363762316131376131373437313138
+          3664653632663735306130643832663030666234376462630a643263316465653430316437313763
+          38363538626166643230653366306664373734623266393633306162663066376532663034376439
+          3265376462646530340a666137633664306638333162613631343266376630653766333031653338
+          64666364386433303266313733333734373233396431663335303864343135663738303532353163
+          33373064353165346538393365363964376434653461303739656461646266636333613362303237
+          64313463393532306237383437346563383732653363613562666135643261303730343334616132
+          34643930653932363330
diff --git a/host_vars/couchdb-worker-x86-64-debian-dal-1-06.yml b/host_vars/couchdb-worker-x86-64-debian-dal-1-06.yml
new file mode 100644
index 0000000..38a859f
--- /dev/null
+++ b/host_vars/couchdb-worker-x86-64-debian-dal-1-06.yml
@@ -0,0 +1,10 @@
+jenkins_secret: !vault |
+          $ANSIBLE_VAULT;1.1;AES256
+          33653535663863366366633432313234663434326566353066663639363064396631626239356636
+          6231356135353539326430643135316436633732366537390a303934313938336236303164653236
+          34356534356333363930613262353661643037653333333064303332353931356437613438353065
+          6363663832323265390a623662633639666236663335353364356430386136623832623361343437
+          63303738306230326534663736303935373433656663336433343662306439383864643536323039
+          62643762353830383738316431326433633136373462653063636234653932326438633934363135
+          66376131666635623731633539303563626537313266326634633663343634323432613564623230
+          64643335373032326138
diff --git a/host_vars/couchdb-worker-x86-64-debian-dal-1-07.yml b/host_vars/couchdb-worker-x86-64-debian-dal-1-07.yml
new file mode 100644
index 0000000..375e9b5
--- /dev/null
+++ b/host_vars/couchdb-worker-x86-64-debian-dal-1-07.yml
@@ -0,0 +1,10 @@
+jenkins_secret: !vault |
+          $ANSIBLE_VAULT;1.1;AES256
+          35336233376664386561303534616466363739653266393831313633323662633463353339376533
+          3735386233313837303061373933663038333938636237380a333765393933653562383564613233
+          32643430653730353261623334386463643832303436636130613562333566383137656331316165
+          3530343765396433360a666662643566376336396661646566633034363938386538343165323633
+          31656664646534626133336637643736316639623833613130613330356136656562643934316439
+          35303733376235623238666534636232383231343433366330633233343339353366646130663061
+          39316239366335636235393130663531643363333639383337316461636232663462323163303533
+          62386163623762313262
diff --git a/host_vars/couchdb-worker-x86-64-debian-dal-1-08.yml b/host_vars/couchdb-worker-x86-64-debian-dal-1-08.yml
new file mode 100644
index 0000000..e4dbff5
--- /dev/null
+++ b/host_vars/couchdb-worker-x86-64-debian-dal-1-08.yml
@@ -0,0 +1,10 @@
+jenkins_secret: !vault |
+          $ANSIBLE_VAULT;1.1;AES256
+          64326134643565656264336363396631363535623036366630336532343836306236326536363163
+          3363663263653036396331646133363866616331303466660a333763333938313265343763643064
+          36656234636566353430386666643931393634363965373836613439346231376662323537613131
+          6430343661666639370a663264323034353561303134366364613364623064356536386433323462
+          31333964393832356134643932376661393732356566353466616563383730643362343931633837
+          63393037313230326463363938623762626662346330346534366634386639336134396361663364
+          64336364306433303961396630333733353738633366343532383866323334363433373231633834
+          64313438393065383131
diff --git a/production b/production
index 31efd3c..9bc042c 100644
--- a/production
+++ b/production
@@ -1,100 +1,185 @@
 all:
   children:
-    ci_agents:
+    bastions:
       hosts:
-        169.48.153.210:
+        couchdb-bastion-x86-64-debian-dal-1-1:
           boot_volume:
-            device: 0717_5afac964-7ec6-4dad-a84d-b09b4d992949-vgqqr
-            name: couchdb-ci-worker-dal-1-2-boot
+            device: 0717-24ba0f68-404a-4f68-82c8-0e885fc3e759-nx629
+            name: couchdb-ci-bastion-dal-1-1-boot
           instance:
-            created_at: '2019-12-11T16:51:02Z'
-            id: 0717_d97c67df-1f04-41f8-9461-9b1d5721e408
-            name: couchdb-ci-worker-dal-1-2
-            profile: cx2-4x8
+            created_at: '2020-01-07T18:38:33Z'
+            id: 0717_5ecb1169-95ac-465b-a505-d172093972d1
+            name: couchdb-bastion-x86-64-debian-dal-1-1
+            profile: cx2-2x4
             subnet: couchdb-ci-farm-dal-1
             vpc: couchdb-ci-farm-vpc
             zone: us-south-1
           ip_addrs:
-            private: 10.240.0.5
-            public: 169.48.153.210
+            private: 10.240.0.11
+            public: 169.48.153.153
           system:
             arch: amd64
-            num_cpus: 4
-            ram: 8
-        169.48.153.7:
+            num_cpus: 2
+            ram: 4
+    ci_agents:
+      hosts:
+        couchdb-worker-x86-64-debian-dal-1-01:
           boot_volume:
             device: 0717_72564344-27ce-4e79-91d8-aacfaba35421-vv2gd
             name: couchdb-ci-worker-dal-1-1-boot
           instance:
             created_at: '2019-12-11T16:50:33Z'
             id: 0717_4d64226a-ffad-4523-b5b3-78769a1d0bbe
-            name: couchdb-ci-worker-dal-1-1
+            name: couchdb-worker-x86-64-debian-dal-1-01
             profile: cx2-4x8
             subnet: couchdb-ci-farm-dal-1
             vpc: couchdb-ci-farm-vpc
             zone: us-south-1
           ip_addrs:
+            bastion: 169.48.153.153
             private: 10.240.0.4
-            public: 169.48.153.7
+            public: null
           system:
             arch: amd64
             num_cpus: 4
             ram: 8
-        169.48.154.118:
+        couchdb-worker-x86-64-debian-dal-1-02:
           boot_volume:
-            device: 0717_4abf905c-b565-4537-a4f3-b9e365d945ed-tbfg5
-            name: couchdb-ci-worker-dal-1-4-boot
+            device: 0717_5afac964-7ec6-4dad-a84d-b09b4d992949-vgqqr
+            name: couchdb-ci-worker-dal-1-2-boot
           instance:
-            created_at: '2019-12-11T16:51:39Z'
-            id: 0717_c4b21ff3-96e9-45a5-a77c-a90d6ac723dc
-            name: couchdb-ci-worker-dal-1-4
+            created_at: '2019-12-11T16:51:02Z'
+            id: 0717_d97c67df-1f04-41f8-9461-9b1d5721e408
+            name: couchdb-worker-x86-64-debian-dal-1-02
             profile: cx2-4x8
             subnet: couchdb-ci-farm-dal-1
             vpc: couchdb-ci-farm-vpc
             zone: us-south-1
           ip_addrs:
-            private: 10.240.0.7
-            public: 169.48.154.118
+            bastion: 169.48.153.153
+            private: 10.240.0.5
+            public: null
           system:
             arch: amd64
             num_cpus: 4
             ram: 8
-        169.48.154.14:
+        couchdb-worker-x86-64-debian-dal-1-03:
           boot_volume:
             device: 0717_f51ebb9c-5081-47f0-bbf9-07a1b1ba5e73-nwzzg
             name: couchdb-ci-worker-dal-1-3-boot
           instance:
             created_at: '2019-12-11T16:51:21Z'
             id: 0717_04df61d7-fb30-4251-9f59-7566c93c8a92
-            name: couchdb-ci-worker-dal-1-3
+            name: couchdb-worker-x86-64-debian-dal-1-03
             profile: cx2-4x8
             subnet: couchdb-ci-farm-dal-1
             vpc: couchdb-ci-farm-vpc
             zone: us-south-1
           ip_addrs:
+            bastion: 169.48.153.153
             private: 10.240.0.6
-            public: 169.48.154.14
+            public: null
           system:
             arch: amd64
             num_cpus: 4
             ram: 8
-        169.48.154.35:
+        couchdb-worker-x86-64-debian-dal-1-04:
           boot_volume:
-            device: 0717_1a5c43f9-a22a-4258-9514-13703dfc5fb0-wkn8z
+            device: 0717-cd555806-1455-4329-8f77-d2bbccaa2352-s2zmh
+            name: couchdb-ci-worker-dal-1-4-boot
+          instance:
+            created_at: '2020-01-07T17:53:05Z'
+            id: 0717_e8cb32f9-4861-48be-b22d-2b20d6e23b79
+            name: couchdb-worker-x86-64-debian-dal-1-04
+            profile: cx2-4x8
+            subnet: couchdb-ci-farm-dal-1
+            vpc: couchdb-ci-farm-vpc
+            zone: us-south-1
+          ip_addrs:
+            bastion: 169.48.153.153
+            private: 10.240.0.9
+            public: null
+          system:
+            arch: amd64
+            num_cpus: 4
+            ram: 8
+        couchdb-worker-x86-64-debian-dal-1-05:
+          boot_volume:
+            device: 0717-3de36e3f-40ab-49f6-b757-181f07e0ebf2-2mg2b
             name: couchdb-ci-worker-dal-1-5-boot
           instance:
-            created_at: '2019-12-11T16:51:55Z'
-            id: 0717_e4857481-a79e-4848-a1c5-38e2577f815c
-            name: couchdb-ci-worker-dal-1-5
+            created_at: '2020-01-07T17:53:40Z'
+            id: 0717_37a9351f-99a9-484d-aec5-c0da940c2e29
+            name: couchdb-worker-x86-64-debian-dal-1-05
             profile: cx2-4x8
             subnet: couchdb-ci-farm-dal-1
             vpc: couchdb-ci-farm-vpc
             zone: us-south-1
           ip_addrs:
-            private: 10.240.0.8
-            public: 169.48.154.35
+            bastion: 169.48.153.153
+            private: 10.240.0.10
+            public: null
           system:
             arch: amd64
             num_cpus: 4
             ram: 8
-
+        couchdb-worker-x86-64-debian-dal-1-06:
+          boot_volume:
+            device: 0717-2f6e67ea-d065-4ea0-92cb-5abc75070994-x9ntk
+            name: couchdb-ci-worker-dal-1-6-boot
+          instance:
+            created_at: '2020-01-07T21:03:39Z'
+            id: 0717_001ae386-bf78-4d1b-bde5-9bddd5de9089
+            name: couchdb-worker-x86-64-debian-dal-1-06
+            profile: cx2-4x8
+            subnet: couchdb-ci-farm-dal-1
+            vpc: couchdb-ci-farm-vpc
+            zone: us-south-1
+          ip_addrs:
+            bastion: 169.48.153.153
+            private: 10.240.0.14
+            public: null
+          system:
+            arch: amd64
+            num_cpus: 4
+            ram: 8
+        couchdb-worker-x86-64-debian-dal-1-07:
+          boot_volume:
+            device: 0717-87fed9c8-4f01-4ef3-92fb-67e7b9751a9f-zjjms
+            name: couchdb-ci-worker-dal-1-7-boot
+          instance:
+            created_at: '2020-01-07T21:04:06Z'
+            id: 0717_8455adf5-78bc-466f-ad18-44ce6988576d
+            name: couchdb-worker-x86-64-debian-dal-1-07
+            profile: cx2-4x8
+            subnet: couchdb-ci-farm-dal-1
+            vpc: couchdb-ci-farm-vpc
+            zone: us-south-1
+          ip_addrs:
+            bastion: 169.48.153.153
+            private: 10.240.0.15
+            public: null
+          system:
+            arch: amd64
+            num_cpus: 4
+            ram: 8
+        couchdb-worker-x86-64-debian-dal-1-08:
+          boot_volume:
+            device: 0717-1bde8488-3508-4824-9526-6c2e48c193b0-tfszz
+            name: couchdb-ci-worker-dal-1-8-boot
+          instance:
+            created_at: '2020-01-07T21:04:49Z'
+            id: 0717_e00b3214-e4f7-426e-b644-b40ae1c3fa79
+            name: couchdb-worker-x86-64-debian-dal-1-08
+            profile: cx2-4x8
+            subnet: couchdb-ci-farm-dal-1
+            vpc: couchdb-ci-farm-vpc
+            zone: us-south-1
+          ip_addrs:
+            bastion: 169.48.153.153
+            private: 10.240.0.16
+            public: null
+          system:
+            arch: amd64
+            num_cpus: 4
+            ram: 8
diff --git a/roles/ci_agent/files/runit-logs b/roles/ci_agent/files/runit-logs
new file mode 100644
index 0000000..3195b01
--- /dev/null
+++ b/roles/ci_agent/files/runit-logs
@@ -0,0 +1,2 @@
+#!/bin/sh
+exec chpst svlogd -tt ./main
diff --git a/roles/ci_agent/tasks/main.yml b/roles/ci_agent/tasks/main.yml
index 055160f..9a59ac1 100644
--- a/roles/ci_agent/tasks/main.yml
+++ b/roles/ci_agent/tasks/main.yml
@@ -1,18 +1,15 @@
 - name: Install Docker gpg key for Apt
-  become: yes
   apt_key:
     url: https://download.docker.com/linux/debian/gpg
     state: present
 
 - name: Setup Docker Apt repository
-  become: yes
   apt_repository:
     repo: deb https://download.docker.com/linux/debian {{ ansible_distribution_release }} stable
     filename: docker
     state: present
 
 - name: Install Docker Packages
-  become: yes
   apt:
     name: "{{ packages }}"
   vars:
@@ -22,7 +19,6 @@
       - docker-ce-cli
 
 - name: Install multi-architecture support for Docker
-  become: yes
   apt:
     name: "{{ packages }}"
     state: latest
@@ -33,7 +29,6 @@
       - qemu-user-static
 
 - name: Install Java 8
-  become: yes
   apt:
     name: "{{ packages }}"
     state: latest
@@ -42,13 +37,11 @@
       - openjdk-8-jre-headless
 
 - name: Add group jenkins
-  become: yes
   group:
     name: jenkins
     gid: 910
 
 - name: Add user jenkins
-  become: yes
   user:
     name: jenkins
     uid: 910
@@ -58,29 +51,50 @@
     state: present
     shell: /bin/bash
 
-- name: Add Apache Infra ssh key
-  become: yes
-  authorized_key:
-    user: jenkins
-    key: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAtxkcKDiPh1OaVzaVdc80daKq2sRy8aAgt8u2uEcLClzMrnv/g19db7XVggfT4+HPCqcbFbO3mtVnUnWWtuSEpDjqriWnEcSj2G1P53zsdKEu9qCGLmEFMgwcq8b5plv78PRdAQn09WCBI1QrNMypjxgCKhNNn45WqV4AD8Jp7/8=
-
 - name: Install kill-old-docker.sh
-  become: yes
   copy:
     src: kill-old-docker.sh
     dest: /usr/local/bin/kill-old-docker.sh
     mode: 0755
 
 - name: Add kill-old-docker.sh cron entry
-  become: yes
   cron:
     name: Kill old docker containers
     hour: '19'
     job: /usr/local/bin/kill-old-docker.sh
 
 - name: Add docker prune cron entry
-  become: yes
   cron:
     name: Docker prune
     hour: '19'
     job: /usr/bin/docker system prune -a -f --filter "until=72h"
+
+- name: Create Jenkins runit service directory
+  file:
+    path: /etc/sv/jenkins
+    state: directory
+
+- name: Create Jenkins runit log directory
+  file:
+    path: /etc/sv/jenkins/log/main
+    state: directory
+
+- name: Create Jenkins runit run script
+  template:
+    src: runit-main.j2
+    dest: /etc/sv/jenkins/run
+    mode: 0755
+
+- name: Create Jenkins runit logs run script
+  copy:
+    src: runit-logs
+    dest: /etc/sv/jenkins/log/run
+    mode: 0755
+
+- name: Enable Jenkins runit service
+  file:
+    src: /etc/sv/jenkins
+    dest: /etc/service/jenkins
+    state: link
+
+
diff --git a/roles/ci_agent/templates/runit-main.j2 b/roles/ci_agent/templates/runit-main.j2
new file mode 100644
index 0000000..27e62e5
--- /dev/null
+++ b/roles/ci_agent/templates/runit-main.j2
@@ -0,0 +1,6 @@
+#!/bin/sh
+exec 2>&1
+cd /home/jenkins
+curl -sSf https://ci-couchdb.apache.org/jnlpJars/agent.jar --output /home/jenkins/agent.jar
+chown jenkins:jenkins /home/jenkins/agent.jar
+exec chpst -u jenkins:jenkins:docker java -jar agent.jar -jnlpUrl https://ci-couchdb.apache.org/computer/{{ hostvars[inventory_hostname]["instance"]["name"] }}/slave-agent.jnlp -secret {{ jenkins_secret }} -workDir "/home/jenkins"
diff --git a/roles/common/tasks/main.yml b/roles/common/tasks/main.yml
index 35159e1..9b6be4e 100644
--- a/roles/common/tasks/main.yml
+++ b/roles/common/tasks/main.yml
@@ -8,7 +8,6 @@
     - https://github.com/wohali.keys
 
 - name: Install basic ubiquitous packages
-  become: yes
   apt:
     name: "{{ packages }}"
     state: latest
diff --git a/ssh.cfg b/ssh.cfg
new file mode 100644
index 0000000..95da36a
--- /dev/null
+++ b/ssh.cfg
@@ -0,0 +1,57 @@
+Host couchdb-bastion-x86-64-debian-dal-1-1
+  Hostname 169.48.153.153
+  User root
+  ForwardAgent yes
+  StrictHostKeyChecking no
+  ControlMaster auto
+  ControlPath ~/.ssh/ansible-%r@%h:%p
+  ControlPersist 30m
+
+Host couchdb-worker-x86-64-debian-dal-1-08
+  Hostname 10.240.0.16
+  User root
+  StrictHostKeyChecking no
+  ProxyCommand /usr/bin/ssh -W %h:%p -q root@169.48.153.153
+
+Host couchdb-worker-x86-64-debian-dal-1-05
+  Hostname 10.240.0.10
+  User root
+  StrictHostKeyChecking no
+  ProxyCommand /usr/bin/ssh -W %h:%p -q root@169.48.153.153
+
+Host couchdb-worker-x86-64-debian-dal-1-04
+  Hostname 10.240.0.9
+  User root
+  StrictHostKeyChecking no
+  ProxyCommand /usr/bin/ssh -W %h:%p -q root@169.48.153.153
+
+Host couchdb-worker-x86-64-debian-dal-1-07
+  Hostname 10.240.0.15
+  User root
+  StrictHostKeyChecking no
+  ProxyCommand /usr/bin/ssh -W %h:%p -q root@169.48.153.153
+
+Host couchdb-worker-x86-64-debian-dal-1-06
+  Hostname 10.240.0.14
+  User root
+  StrictHostKeyChecking no
+  ProxyCommand /usr/bin/ssh -W %h:%p -q root@169.48.153.153
+
+Host couchdb-worker-x86-64-debian-dal-1-01
+  Hostname 10.240.0.4
+  User root
+  StrictHostKeyChecking no
+  ProxyCommand /usr/bin/ssh -W %h:%p -q root@169.48.153.153
+
+Host couchdb-worker-x86-64-debian-dal-1-03
+  Hostname 10.240.0.6
+  User root
+  StrictHostKeyChecking no
+  ProxyCommand /usr/bin/ssh -W %h:%p -q root@169.48.153.153
+
+Host couchdb-worker-x86-64-debian-dal-1-02
+  Hostname 10.240.0.5
+  User root
+  StrictHostKeyChecking no
+  ProxyCommand /usr/bin/ssh -W %h:%p -q root@169.48.153.153
+
diff --git a/tools/gen-config b/tools/gen-config
new file mode 100755
index 0000000..1f39bd9
--- /dev/null
+++ b/tools/gen-config
@@ -0,0 +1,268 @@
+#!/usr/bin/env python
+
+import argparse as ap
+import ConfigParser as cp
+import json
+import os
+import re
+import textwrap
+
+import requests
+import yaml
+
+
+IBM_CLOUD_URL = "https://us-south.iaas.cloud.ibm.com/v1/"
+IAM_URL = "https://iam.cloud.ibm.com/identity/token"
+
+IBM_CLOUD_GENERATION = "2"
+IBM_CLOUD_VERSION = "2019-08-09"
+
+API_KEY = None
+IAM_TOKEN = None
+SESS = requests.session()
+
+
+def tostr(obj):
+    ret = {}
+    for k, v in obj.items():
+        if isinstance(k, unicode):
+            k = k.encode("utf-8")
+        if isinstance(v, dict):
+            ret[k] = tostr(v)
+        elif isinstance(v, unicode):
+            ret[k] = v.encode("utf-8")
+        else:
+            ret[k] = v
+    return ret
+
+
+def load_api_key():
+    global API_KEY
+    path = os.path.expanduser("~/.couchdb-infra-cm.cfg")
+    if not os.path.exists(path):
+        print "Missing config file: " + path
+        exit(1)
+    parser = cp.SafeConfigParser()
+    parser.read([path])
+    API_KEY = parser.get("ibmcloud", "api_key")
+
+
+def load_iam_token():
+    global IAM_TOKEN
+    headers = {
+        "Accept": "application/json"
+    }
+    data = {
+        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
+        "apikey": API_KEY
+    }
+    resp = SESS.post(IAM_URL, headers=headers, data=data)
+    resp.raise_for_status()
+    body = resp.json()
+    IAM_TOKEN = body["token_type"] + " " + body["access_token"]
+    SESS.headers["Authorization"] = IAM_TOKEN
+
+
+def init():
+    load_api_key()
+    load_iam_token()
+
+
+def list_instances():
+    url = IBM_CLOUD_URL + "/instances"
+    params = {
+        "version": IBM_CLOUD_VERSION,
+        "generation": IBM_CLOUD_GENERATION,
+        "limit": 100
+    }
+    while url:
+        resp = SESS.get(url, params=params)
+        body = resp.json()
+        for instance in body["instances"]:
+            interface_url = instance["primary_network_interface"]["href"]
+            resp = SESS.get(interface_url, params=params)
+            instance["primary_network_interface"] = resp.json()
+            yield instance
+        url = body.get("next")
+
+
+def load_bastion(bastions, instance):
+    if instance["status"] != "running":
+        return
+
+    name = instance["name"]
+    ip_addr = None
+    net_iface = instance["primary_network_interface"]
+    floating_ips = net_iface.get("floating_ips", [])
+    if not floating_ips:
+        print "Bastion is missing a public IP: %s" % name
+        exit(2)
+    ip_addr = floating_ips[0]["address"]
+
+    bastions[name] = {
+        "instance": {
+            "id": instance["id"],
+            "name": instance["name"],
+            "created_at": instance["created_at"],
+            "profile": instance["profile"]["name"],
+            "vpc": instance["vpc"]["name"],
+            "zone": instance["zone"]["name"],
+            "subnet": net_iface["subnet"]["name"]
+        },
+        "ip_addrs": {
+            "public": ip_addr,
+            "private": net_iface["primary_ipv4_address"]
+        },
+        "boot_volume": {
+            "device": instance["boot_volume_attachment"]["device"]["id"],
+            "name": instance["boot_volume_attachment"]["volume"]["name"]
+        },
+        "system": {
+            "arch": instance["vcpu"]["architecture"],
+            "num_cpus": instance["vcpu"]["count"],
+            "ram": instance["memory"]
+        }
+    }
+
+
+def load_ci_agent(ci_agents, instance):
+    if instance["status"] != "running":
+        return
+
+    name = instance["name"]
+    net_iface = instance["primary_network_interface"]
+
+    ci_agents[name] = {
+        "instance": {
+            "id": instance["id"],
+            "name": instance["name"],
+            "created_at": instance["created_at"],
+            "profile": instance["profile"]["name"],
+            "vpc": instance["vpc"]["name"],
+            "zone": instance["zone"]["name"],
+            "subnet": net_iface["subnet"]["name"]
+        },
+        "ip_addrs": {
+            "bastion": None,
+            "public": None,
+            "private": net_iface["primary_ipv4_address"]
+        },
+        "boot_volume": {
+            "device": instance["boot_volume_attachment"]["device"]["id"],
+            "name": instance["boot_volume_attachment"]["volume"]["name"]
+        },
+        "system": {
+            "arch": instance["vcpu"]["architecture"],
+            "num_cpus": instance["vcpu"]["count"],
+            "ram": instance["memory"]
+        }
+    }
+
+
+def assign_bastions(bastions, ci_agents):
+    subnets = {}
+    for (host, bastion) in bastions.items():
+        subnet = bastion["instance"]["subnet"]
+        ip_addr = bastion["ip_addrs"]["public"]
+        assert subnet not in subnets
+        subnets[subnet] = ip_addr
+    for (host, ci_agent) in ci_agents.items():
+        subnet = ci_agent["instance"]["subnet"]
+        assert subnet in subnets
+        ci_agent["ip_addrs"]["bastion"] = subnets[subnet]
+
+
+def write_inventory(fname, bastions, ci_agents):
+    inventory = {"all": {
+        "children": {
+            "ci_agents": {
+                "hosts": ci_agents
+            },
+            "bastions": {
+                "hosts": bastions
+            }
+        }
+    }}
+
+    with open(fname, "w") as handle:
+        yaml.dump(tostr(inventory), stream=handle, default_flow_style=False)
+
+
+def write_ssh_cfg(filename, bastions, ci_agents):
+    bastion_tmpl = textwrap.dedent("""\
+        Host {host}
+          Hostname {ip_addr}
+          User root
+          ForwardAgent yes
+          StrictHostKeyChecking no
+          ControlMaster auto
+          ControlPath ~/.ssh/ansible-%r@%h:%p
+          ControlPersist 30m
+
+        """)
+    ci_agent_tmpl = textwrap.dedent("""\
+        Host {host}
+          Hostname {ip_addr}
+          User root
+          StrictHostKeyChecking no
+          ProxyCommand /usr/bin/ssh -W %h:%p -q root@{bastion_ip}
+
+        """)
+    with open(filename, "w") as handle:
+        for host, info in bastions.items():
+            args = {
+                "host": host,
+                "ip_addr": info["ip_addrs"]["public"]
+            }
+            entry = bastion_tmpl.format(**args)
+            handle.write(entry)
+        for host, info in ci_agents.items():
+            args = {
+                "host": host,
+                "ip_addr": info["ip_addrs"]["private"],
+                "bastion_ip": info["ip_addrs"]["bastion"]
+            }
+            entry = ci_agent_tmpl.format(**args)
+            handle.write(entry)
+
+
+def parse_args():
+    parser = ap.ArgumentParser(description="Inventory Generation")
+    parser.add_argument(
+            "--inventory",
+            default="production",
+            metavar="FILE",
+            type=str,
+            help="Inventory filename to write"
+        )
+    parser.add_argument(
+            "--ssh-cfg",
+            default="ssh.cfg",
+            metavar="FILE",
+            type=str,
+            help="SSH config filename to write"
+        )
+    return parser.parse_args()
+
+
+def main():
+    args = parse_args()
+
+    init()
+
+    bastions = {}
+    ci_agents = {}
+
+    for instance in list_instances():
+        if instance["name"].startswith("couchdb-bastion"):
+            load_bastion(bastions, instance)
+        elif instance["name"].startswith("couchdb-worker"):
+            load_ci_agent(ci_agents, instance)
+
+    assign_bastions(bastions, ci_agents)
+
+    write_inventory(args.inventory, bastions, ci_agents)
+    write_ssh_cfg(args.ssh_cfg, bastions, ci_agents)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/tools/gen-inventory b/tools/gen-inventory
deleted file mode 100755
index 95e9abb..0000000
--- a/tools/gen-inventory
+++ /dev/null
@@ -1,147 +0,0 @@
-#!/usr/bin/env python
-
-import ConfigParser as cp
-import json
-import os
-import re
-
-import requests
-import yaml
-
-
-IBM_CLOUD_URL = "https://us-south.iaas.cloud.ibm.com/v1/"
-IAM_URL = "https://iam.cloud.ibm.com/identity/token"
-
-IBM_CLOUD_GENERATION = "2"
-IBM_CLOUD_VERSION = "2019-08-09"
-
-API_KEY = None
-IAM_TOKEN = None
-SESS = requests.session()
-
-
-def tostr(obj):
-    ret = {}
-    for k, v in obj.items():
-        if isinstance(k, unicode):
-            k = k.encode("utf-8")
-        if isinstance(v, dict):
-            ret[k] = tostr(v)
-        elif isinstance(v, unicode):
-            ret[k] = v.encode("utf-8")
-        else:
-            ret[k] = v
-    return ret
-
-
-def load_api_key():
-    global API_KEY
-    path = os.path.expanduser("~/.couchdb-infra-cm.cfg")
-    if not os.path.exists(path):
-        print "Missing config file: " + path
-        exit(1)
-    parser = cp.SafeConfigParser()
-    parser.read([path])
-    API_KEY = parser.get("ibmcloud", "api_key")
-
-
-def load_iam_token():
-    global IAM_TOKEN
-    headers = {
-        "Accept": "application/json"
-    }
-    data = {
-        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
-        "apikey": API_KEY
-    }
-    resp = SESS.post(IAM_URL, headers=headers, data=data)
-    resp.raise_for_status()
-    body = resp.json()
-    IAM_TOKEN = body["token_type"] + " " + body["access_token"]
-    SESS.headers["Authorization"] = IAM_TOKEN
-
-
-def init():
-    load_api_key()
-    load_iam_token()
-
-
-def list_instances():
-    url = IBM_CLOUD_URL + "/instances"
-    params = {
-        "version": IBM_CLOUD_VERSION,
-        "generation": IBM_CLOUD_GENERATION,
-        "limit": 100
-    }
-    while url:
-        resp = SESS.get(url, params=params)
-        body = resp.json()
-        for instance in body["instances"]:
-            interface_url = instance["primary_network_interface"]["href"]
-            resp = SESS.get(interface_url, params=params)
-            instance["primary_network_interface"] = resp.json()
-            yield instance
-        url = body.get("next")
-
-
-def load_ci_agent(ci_agents, instance):
-    if instance["status"] != "running":
-        return
-
-    name = instance["name"]
-    net_iface = instance["primary_network_interface"]
-    floating_ips = net_iface.get("floating_ips", [])
-
-    if not floating_ips:
-        return
-
-    ip_addr = floating_ips[0]["address"]
-
-    ci_agents[ip_addr] = {
-        "instance": {
-            "id": instance["id"],
-            "name": instance["name"],
-            "created_at": instance["created_at"],
-            "profile": instance["profile"]["name"],
-            "vpc": instance["vpc"]["name"],
-            "zone": instance["zone"]["name"],
-            "subnet": net_iface["subnet"]["name"]
-        },
-        "ip_addrs": {
-            "public": ip_addr,
-            "private": net_iface["primary_ipv4_address"]
-        },
-        "boot_volume": {
-            "device": instance["boot_volume_attachment"]["device"]["id"],
-            "name": instance["boot_volume_attachment"]["volume"]["name"]
-        },
-        "system": {
-            "arch": instance["vcpu"]["architecture"],
-            "num_cpus": instance["vcpu"]["count"],
-            "ram": instance["memory"]
-        }
-    }
-
-
-def main():
-    init()
-
-    ci_agents = {}
-
-    for instance in list_instances():
-        if instance["name"].startswith("couchdb-ci-worker"):
-            load_ci_agent(ci_agents, instance)
-
-    inventory = {"all": {
-        "children": {
-            "ci_agents": {
-                "hosts": ci_agents
-            }
-        }
-    }}
-
-    print yaml.dump(tostr(inventory), default_flow_style=False)
-
-
-if __name__ == "__main__":
-    main()
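For reference, the `ssh.cfg` entries emitted by `write_ssh_cfg` are plain `textwrap.dedent` templates filled in with `str.format`. A minimal standalone sketch of the agent template (the host name and IP addresses here are made up for illustration):

```python
import textwrap

# Same template shape as the ci_agent_tmpl above; values are hypothetical.
ci_agent_tmpl = textwrap.dedent("""\
    Host {host}
      Hostname {ip_addr}
      User root
      StrictHostKeyChecking no
      ProxyCommand /usr/bin/ssh -W %h:%p -q root@{bastion_ip}

    """)

entry = ci_agent_tmpl.format(
    host="couchdb-worker-01",
    ip_addr="10.240.0.5",
    bastion_ip="169.48.0.10",
)
print(entry)
```

This renders one complete `Host` block per agent, so concatenating the entries in a loop (as `write_ssh_cfg` does) yields a valid OpenSSH client config.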