CouchDB “in a snap”

Table of Contents

  1. Installation
  2. Configuration
  3. Clustering
  4. Building

Installation

Downloading from the snap store

The snap can be installed from a file or directly from the snap store:

$ sudo snap install couchdb

If this is your first time installing CouchDB, you will need to set an admin password and manually start CouchDB.

$ sudo snap set couchdb admin=[your-password-goes-here]
$ sudo snap start couchdb 
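Once the interfaces described below are connected, you can confirm CouchDB is up by querying the root endpoint (this assumes the snap's default port of 5984):

$ curl http://localhost:5984/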

Enable snap permissions

The snap installation uses AppArmor to protect your system. CouchDB requests access to two interfaces: mount-observe, which is used by the disk compactor to know when to initiate a cleanup; and process-control, which is used by the indexer to set the priority of couchjs to ‘nice’. These two interfaces are required for CouchDB to run correctly.

To connect the interfaces type:

$ sudo snap connect couchdb:mount-observe
$ sudo snap connect couchdb:process-control
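You can verify that both interfaces are connected by listing the snap's connections:

$ snap connections couchdb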

Configuration

Be sure to read the CouchDB documentation first.

CouchDB defaults are stored in /snap/couchdb/current/opt/couchdb/etc/. This includes default.ini and any default.d/* files added during the snap build process. These are all read-only and should never be changed.

User-configurable files are stored in /var/snap/couchdb/current/etc/ and are writeable. Changes may be made to local.ini or placed in any local.d/*.ini file. Configuration management tools (like puppet, chef, ansible, and salt) can be used to manage these files.
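For example, a hypothetical drop-in file could override the HTTP port and bind address; the file name 10-custom.ini below is an arbitrary choice:

; /var/snap/couchdb/current/etc/local.d/10-custom.ini (hypothetical example)
[chttpd]
port = 5984
bind_address = 0.0.0.0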

Erlang settings are stored in the /var/snap/couchdb/current/etc/vm.args file. The snap configuration tool can be used to quickly change the node name and security cookie:

$ sudo snap set couchdb name=couchdb@1.2.3.4 setcookie=cutter

Be sure to read vm.args to understand what these settings do before changing them.
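After the snap set command above, vm.args should contain lines like:

-name couchdb@1.2.3.4
-setcookie cutter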

Any configuration file changes require restarting CouchDB before they are effective:

$ sudo snap restart couchdb

Monitoring CouchDB

The logs, by default, are captured by journald. View the logs with either command:

$ snap logs couchdb -f
$ journalctl -u snap.couchdb* -f

Removing CouchDB

There are several differences between installation via ‘apt’ and via ‘snap’. One important difference is what happens when removing couchdb. Calling ‘apt remove couchdb’ removes the binaries but leaves the configuration and the couch database files behind, leaving the user to clean up any databases later.

Calling ‘snap remove couchdb’ will remove binaries, configurations and the database.

On newer versions of snapd (2.39+), a snapshot is made of the SNAP_DATA and SNAP_COMMON directories and stored (subject to disk space) for about 30 days. On these newer versions, a ‘snap remove’ followed by a ‘snap install’ may restore the database, but you are best advised to make your own backup before removing couchdb. If you do not want to keep the configuration or database files, you can skip the snapshot by passing the --purge parameter to snap remove.
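For example, assuming a recent snapd, you can list the automatic snapshots and restore one by its set id:

$ snap saved
$ snap restore 1        # '1' is a set id as shown by 'snap saved'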

To remove your installation, run one of:

$ sudo snap remove couchdb
$ sudo snap remove couchdb --purge

Clustering

You can set up a snap-based cluster on your desktop in no time using the couchdb snap.

Create three nodes

In the example below, we are going to set up a three-node CouchDB cluster. (Three is the minimum number needed to support clustering features.) We'll also set up a separate, single machine for making backups. This example uses the parallel instances feature of snapd, available since version 2.36.

First we need to enable snapd's parallel-instances feature.

$ snap set system experimental.parallel-instances=true
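You can confirm the flag is set with:

$ snap get system experimental.parallel-instances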

We install each couchdb instance from the snap store, connect its interfaces, and set its node name, Erlang cookie, port, and admin password. (The bind address is opened up later, during cluster setup.)

$ snap install couchdb_1
$ snap connect couchdb_1:mount-observe
$ snap connect couchdb_1:process-control
$ snap set couchdb_1 name=couchdb1@127.0.0.1 setcookie=cutter port=5981 admin=Be1stDB

You will need to edit the local configuration file to manually set the data directories. Open /var/snap/couchdb_1/current/etc/local.ini and ensure that the [couchdb] stanza looks like this:

[couchdb]
;max_document_size = 4294967296 ; bytes
;os_process_timeout = 5000
database_dir = /var/snap/couchdb_1/common/data
view_index_dir = /var/snap/couchdb_1/common/data

Start your engine ... and confirm that CouchDB is running.

$ snap start couchdb_1

$ curl -X GET http://localhost:5981
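If the node is up, curl returns a welcome message; the exact fields vary by version, but it should look something like:

{"couchdb":"Welcome","version":"3.x.x","vendor":{"name":"The Apache Software Foundation"}}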

Then repeat for couchdb_2 and couchdb_3, editing the local.ini and changing the name and port number for each. They should all have the same admin password and cookie. (The separate couchdb_bkup instance is set up later, in the backup section.)

$ snap install couchdb_2
$ snap connect couchdb_2:mount-observe
$ snap connect couchdb_2:process-control
$ snap set couchdb_2 name=couchdb2@127.0.0.1 setcookie=cutter port=5982 admin=Be1stDB
$ snap install couchdb_3
$ snap connect couchdb_3:mount-observe
$ snap connect couchdb_3:process-control
$ snap set couchdb_3 name=couchdb3@127.0.0.1 setcookie=cutter port=5983 admin=Be1stDB

Enable CouchDB Cluster (using the HTTP interface)

Have the first node generate two UUIDs:

$ curl http://localhost:5981/_uuids?count=2

Each instance within the cluster needs to share the same UUID ...

$ curl -X PUT http://admin:Be1stDB@127.0.0.1:5981/_node/_local/_config/couchdb/uuid -d '"f6f22e2c664b49ba2c6dc88379002548"'
$ curl -X PUT http://admin:Be1stDB@127.0.0.1:5982/_node/_local/_config/couchdb/uuid -d '"f6f22e2c664b49ba2c6dc88379002548"'
$ curl -X PUT http://admin:Be1stDB@127.0.0.1:5983/_node/_local/_config/couchdb/uuid -d '"f6f22e2c664b49ba2c6dc88379002548"'
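You can read the value back on each node to confirm it was applied, for example:

$ curl http://admin:Be1stDB@127.0.0.1:5982/_node/_local/_config/couchdb/uuid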

... and a common secret, shared by all nodes but different from the UUID ...

$ curl -X PUT http://admin:Be1stDB@127.0.0.1:5981/_node/_local/_config/couch_httpd_auth/secret -d '"f6f22e2c664b49ba2c6dc88379002a80"'
$ curl -X PUT http://admin:Be1stDB@127.0.0.1:5982/_node/_local/_config/couch_httpd_auth/secret -d '"f6f22e2c664b49ba2c6dc88379002a80"'
$ curl -X PUT http://admin:Be1stDB@127.0.0.1:5983/_node/_local/_config/couch_httpd_auth/secret -d '"f6f22e2c664b49ba2c6dc88379002a80"'

... after which each node can be enabled for clustering:

$ curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@127.0.0.1:5981/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"Be1stDB", "node_count":"3"}'
$ curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@127.0.0.1:5982/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"Be1stDB", "node_count":"3"}'
$ curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@127.0.0.1:5983/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"Be1stDB", "node_count":"3"}'

You can check the setup status on each node:

$ curl http://admin:Be1stDB@127.0.0.1:5981/_cluster_setup
$ curl http://admin:Be1stDB@127.0.0.1:5982/_cluster_setup
$ curl http://admin:Be1stDB@127.0.0.1:5983/_cluster_setup

Configure CouchDB Cluster (using the HTTP interface)

Next we want to join the three nodes together. We do this through requests to the first node.

curl -X PUT "http://admin:Be1stDB@127.0.0.1:5981/_node/_local/_nodes/couchdb2@127.0.0.1" -d '{"port":5982}'
curl -X PUT "http://admin:Be1stDB@127.0.0.1:5981/_node/_local/_nodes/couchdb3@127.0.0.1" -d '{"port":5983}'

curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@127.0.0.1:5981/_cluster_setup -d '{"action": "finish_cluster"}'

curl http://admin:Be1stDB@127.0.0.1:5981/_cluster_setup

If everything has been successful, the three nodes can be seen here:

$ curl -X GET "http://admin:Be1stDB@127.0.0.1:5981/_membership"

Now we have a functioning three node cluster. Next we will test it.

An Example Database

Let's create an example database ...

$ curl -X PUT http://admin:Be1stDB@localhost:5981/example
$ curl -X PUT http://admin:Be1stDB@localhost:5981/example/aaa -d '{"test":1}' -H "Content-Type: application/json"
$ curl -X PUT http://admin:Be1stDB@localhost:5981/example/aab -d '{"test":2}' -H "Content-Type: application/json"
$ curl -X PUT http://admin:Be1stDB@localhost:5981/example/aac -d '{"test":3}' -H "Content-Type: application/json"

... and verify that it is created on all three nodes ...

$ curl -X GET http://localhost:5981/example/_all_docs
$ curl -X GET http://localhost:5982/example/_all_docs
$ curl -X GET http://localhost:5983/example/_all_docs

... and is separated into shards on the disk.

$ ls /var/snap/couchdb_?/common/data/shards/
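With CouchDB 3.x defaults (q=2), each node's data directory should contain the same two shard ranges, something like:

00000000-7fffffff  80000000-ffffffff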

Backing Up CouchDB

We will configure the backup machine as a single-node instance (n=1, q=1).

$ snap install couchdb_bkup
$ snap set couchdb_bkup name=couchdb0@localhost setcookie=cutter port=5980 admin=Be1stDB
$ snap start couchdb_bkup
$ curl -X PUT http://admin:Be1stDB@localhost:5980/_node/_local/_config/cluster/n -d '"1"'
$ curl -X PUT http://admin:Be1stDB@localhost:5980/_node/_local/_config/cluster/q -d '"1"'
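You can read the settings back to confirm they took effect:

$ curl http://admin:Be1stDB@localhost:5980/_node/_local/_config/cluster/n
$ curl http://admin:Be1stDB@localhost:5980/_node/_local/_config/cluster/q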

We will manually replicate to this machine from one of the nodes (any one will do).

$ curl -X POST http://admin:Be1stDB@localhost:5980/_replicate \
    -d '{"source":"http://localhost:5981/example","target":"example","continuous":false,"create_target":true}' \
    -H "Content-Type: application/json"
$ curl -X GET http://admin:Be1stDB@localhost:5980/example/_all_docs

The backup database has a single shard and single directory:

$ ls /var/snap/couchdb_bkup/common/data/shards/
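With q=1 the entire key space lives in a single shard range, so the listing should show just one directory, something like:

00000000-ffffffff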

Remote Shell into CouchDB

In the rare case that you need to connect to the running CouchDB server, a remsh script is provided. You need to specify both the name of the node and the Erlang cookie, even if you are using the defaults.

$ /snap/bin/couchdb.remsh -n couchdb@localhost -c monster
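Once connected you are in a standard Erlang shell on the running node. As a minimal example, you can check the node's name and which other nodes it is connected to (press Ctrl+C twice to exit without affecting the server):

(couchdb@localhost)1> node().
'couchdb@localhost'
(couchdb@localhost)2> nodes().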

Building this snap

This build requires core20. On Ubuntu 18.04, you will need to do the following:

$ sudo apt remove snapcraft
$ sudo snap remove snapcraft
$ sudo snap install snapcraft --classic

Then building is easy:

$ snapcraft

The self-built snap will need to be installed using --dangerous:

$ sudo snap install ./couchdb_3.0.0_amd64.snap --dangerous

Clean up with:

$ sudo snapcraft clean          # sudo may not be required but is safe