In the first part of this article we configured Vagrant to run two Ubuntu 14.04 Trusty Tahr virtual machines, called pg and backup respectively. In this second part we will look at how to use Puppet to set up and configure a PostgreSQL server on pg and back it up via Barman from the backup box.
Puppet: configuration
After defining the machines as per the previous article, we need to specify the required Puppet modules that librarian-puppet
will manage for us.
Two modules are required:

- puppetlabs/postgresql (http://github.com/puppetlabs/puppetlabs-postgresql/) to install PostgreSQL on the pg VM
- it2ndq/barman (http://github.com/2ndquadrant-it/puppet-barman) to install Barman on backup
Both modules will be installed from Puppet Forge. For the puppetlabs/postgresql module, we have to use version 4.2.0 at most for the moment, as the latest version (4.3.0) breaks the postgres_password parameter we will be using later (see this pull request). Let's create a file called Puppetfile with this content in the project directory:
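A minimal Puppetfile consistent with the requirements above would look like this (the Forge URL is the standard Puppet Forge endpoint; the version pin keeps puppetlabs/postgresql at 4.2.0):

```ruby
forge "https://forgeapi.puppetlabs.com"

mod "puppetlabs/postgresql", "4.2.0"
mod "it2ndq/barman"
```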
We can now install the Puppet modules and their dependencies by running:
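With librarian-puppet available, that is:

```shell
librarian-puppet install --verbose
```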
Although not essential, it is preferable to use the --verbose option every time librarian-puppet is used. Without it the command is very quiet, and it is useful to know in advance what it is doing. For example, without --verbose you may waste precious time waiting for a dependency conflict to be resolved, only to see an error many minutes later.
Upon successful completion of the command, a modules
directory containing the barman
and postgresql
modules and their dependencies (apt
, concat
, stdlib
) will be created in our working directory. In addition, librarian-puppet
will create the Puppetfile.lock
file to identify dependencies and versions of the installed modules, pinning them to prevent future updates. This way, subsequent librarian-puppet install
runs will always install the same version of the modules instead of possible upgrades (in case an upgrade is required, librarian-puppet update
will do the trick).
Now we can tell Vagrant we are using a Puppet manifest to provision the servers. We alter the Vagrantfile
as follows:
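A provisioning block along these lines needs to be added to each VM definition (the exact surrounding structure depends on the Vagrantfile built in part one):

```ruby
config.vm.provision :puppet do |puppet|
  puppet.manifests_path = "manifests"
  puppet.manifest_file  = "site.pp"
  puppet.module_path    = [".", "modules"]
end
```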
With the lines we’ve just added, we’ve given Vagrant the instructions to provision the VMs using manifests/site.pp
as the main manifest and the modules included in the modules
directory. This is the final version of our Vagrantfile
.
We now have to create the manifests
directory:
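```shell
mkdir manifests
```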
and write in it a first version of site.pp
. We’ll start with a very basic setup:
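A minimal first manifest, assuming the node names match the pg and backup hostnames defined in part one, could be:

```puppet
# Nothing on pg yet
node 'pg' { }

node 'backup' {
  # Install Barman from the 2ndQuadrant repository with default settings
  class { 'barman':
    manage_package_repo => true,
  }
}
```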
We can now start the machines and see that on backup
there is a Barman server with a default configuration (and no PostgreSQL on pg
yet). Let’s log into backup
:
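```shell
vagrant up
vagrant ssh backup
```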
and take a look at /etc/barman.conf
:
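The defaults written by the module look roughly like this (the exact values depend on the module and Barman versions in use):

```ini
[barman]
barman_home = /var/lib/barman
barman_user = barman
log_file = /var/log/barman/barman.log
compression = gzip
backup_options = exclusive_backup
minimum_redundancy = 0
retention_policy =
retention_policy_mode = auto
wal_retention_policy = main
configuration_files_directory = /etc/barman.conf.d
```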
The next step is running a PostgreSQL instance on pg. We must be aware of the parameters required by Barman on the PostgreSQL server, so we need to set:

- wal_level at least at archive level
- archive_mode to on
- archive_command so that the WALs can be copied to backup
- a rule in pg_hba.conf to allow access from backup
All of these parameters can be easily set through the puppetlabs/postgresql module. In addition, on the Barman server, we need:

- a PostgreSQL connection string
- a .pgpass file for authentication
- an SSH command
- to perform the SSH key exchange
The it2ndq/barman module generates a private/public keypair in ~barman/.ssh. However, automatically exchanging the keys between the servers requires the presence of a Puppet Master, which is beyond the objectives of this tutorial (it will be part of the next instalment, which will focus on the setup of a Puppet Master and the barman::autoconfigure class), so this last step will be performed manually.
We edit the site.pp
file as follows:
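A manifest consistent with the requirements above could look like this. The IP addresses (192.168.56.221 for pg, 192.168.56.222 for backup), the server name test-server and the password insecure_password are assumptions used for illustration:

```puppet
node 'pg' {
  class { 'postgresql::globals':
    manage_package_repo => true,
    version             => '9.4',
  }->
  class { 'postgresql::server':
    listen_addresses  => '*',
    postgres_password => 'insecure_password',
  }

  # Parameters required by Barman on the PostgreSQL side
  postgresql::server::config_entry {
    'wal_level':       value => 'archive';
    'archive_mode':    value => 'on';
    'archive_command': value => 'rsync -a %p barman@192.168.56.222:/var/lib/barman/test-server/incoming/%f';
  }

  # Let the Barman server connect as the postgres user
  postgresql::server::pg_hba_rule { 'barman access':
    type        => 'host',
    database    => 'all',
    user        => 'postgres',
    address     => '192.168.56.222/32',
    auth_method => 'md5',
  }
}

node 'backup' {
  class { 'barman':
    manage_package_repo => true,
  }

  barman::server { 'test-server':
    conninfo    => 'user=postgres host=192.168.56.221',
    ssh_command => 'ssh postgres@192.168.56.221',
  }

  # Credentials for the PostgreSQL connection
  file { '/var/lib/barman/.pgpass':
    ensure  => file,
    owner   => 'barman',
    group   => 'barman',
    mode    => '0600',
    content => "192.168.56.221:5432:*:postgres:insecure_password\n",
  }
}
```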
Having changed the manifest, the provisioning step has to be rerun:
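```shell
vagrant provision
```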
With the machines running, we can proceed with the key exchanges. We log into pg
:
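```shell
vagrant ssh pg
```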
and we create the keypair for the postgres
user, using ssh-keygen
, leaving every field empty when prompted (so always pressing enter):
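For example, switching to the postgres user first:

```shell
sudo -iu postgres
ssh-keygen
cat ~/.ssh/id_rsa.pub
```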
The last command outputs a long alphanumeric string that has to be appended to the ~barman/.ssh/authorized_keys
file on backup
.
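On backup, as the barman user (the key below is a placeholder for the string printed on pg):

```shell
sudo -iu barman
echo "ssh-rsa AAAA... postgres@pg" >> ~/.ssh/authorized_keys
```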
Similarly, we copy the public key of the barman
user into the authorized_keys
file of the postgres
user on pg
:
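The procedure is symmetric; again, the key shown is a placeholder:

```shell
# on backup, print the barman user's public key
sudo -iu barman cat ~/.ssh/id_rsa.pub

# on pg, append it to the postgres user's authorized_keys
sudo -iu postgres
echo "ssh-rsa AAAA... barman@backup" >> ~/.ssh/authorized_keys
```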
At this point, we make a first connection in both directions between the two servers:
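Using the assumed IP addresses, and accepting the host keys when prompted:

```shell
# on pg, as the postgres user
ssh barman@192.168.56.222

# on backup, as the barman user
ssh postgres@192.168.56.221
```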
We can run barman check
to verify that Barman is working correctly:
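On backup, as the barman user (test-server being the assumed server name):

```shell
barman check test-server
```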
Every line should read “OK”. Now, to perform a backup, simply run:
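```shell
barman backup test-server
```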
A realistic configuration
The Barman configuration used so far is very simple, but you can easily add a few parameters to site.pp
and take advantage of all the features of Barman, such as the retention policies and the new incremental backup available in Barman 1.4.0.
We conclude this tutorial with a realistic use case, with the following requirements:
- a backup every night at 1:00 am
- the possibility of performing a Point-In-Time Recovery to any moment of the last week
- always having at least one backup available
- reporting an error via barman check in case the newest backup is older than a week
- enabling incremental backup to save disk space
We use the Puppet file resource to create a .pgpass file with the connection parameters and a cron resource to generate a job that runs every night. Finally, we edit the barman::server resource to add the required Barman parameters.
The end result is:
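A manifest meeting those requirements is sketched below. As before, the IP addresses, the server name test-server and the password insecure_password are assumptions; the retention_policy, minimum_redundancy, last_backup_maximum_age and reuse_backup parameters map the requirements above onto Barman 1.4.0 features (reuse_backup => 'link' enables file-level incremental backup via hard links):

```puppet
node 'pg' {
  class { 'postgresql::globals':
    manage_package_repo => true,
    version             => '9.4',
  }->
  class { 'postgresql::server':
    listen_addresses  => '*',
    postgres_password => 'insecure_password',
  }

  # Parameters required by Barman on the PostgreSQL side
  postgresql::server::config_entry {
    'wal_level':       value => 'archive';
    'archive_mode':    value => 'on';
    'archive_command': value => 'rsync -a %p barman@192.168.56.222:/var/lib/barman/test-server/incoming/%f';
  }

  # Let the Barman server connect as the postgres user
  postgresql::server::pg_hba_rule { 'barman access':
    type        => 'host',
    database    => 'all',
    user        => 'postgres',
    address     => '192.168.56.222/32',
    auth_method => 'md5',
  }
}

node 'backup' {
  class { 'barman':
    manage_package_repo => true,
  }

  barman::server { 'test-server':
    conninfo                => 'user=postgres host=192.168.56.221',
    ssh_command             => 'ssh postgres@192.168.56.221',
    retention_policy        => 'RECOVERY WINDOW OF 1 WEEK',
    minimum_redundancy      => 1,
    last_backup_maximum_age => '1 WEEK',
    reuse_backup            => 'link',
  }

  # Credentials for the PostgreSQL connection
  file { '/var/lib/barman/.pgpass':
    ensure  => file,
    owner   => 'barman',
    group   => 'barman',
    mode    => '0600',
    content => "192.168.56.221:5432:*:postgres:insecure_password\n",
  }

  # Nightly backup at 1:00 am
  cron { 'barman backup test-server':
    command => '/usr/bin/barman backup test-server',
    user    => 'barman',
    hour    => 1,
    minute  => 0,
  }
}
```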
Conclusion
With 51 lines of Puppet manifest we managed to configure a pair of PostgreSQL/Barman servers with settings similar to those we might want on a production server. We have combined the advantages of having a Barman server to handle backups with those of having an infrastructure managed by Puppet, reusable and versionable.
In the next and final post in this series we will look at how to use a Puppet Master to export resources between different machines, allowing the VMs to exchange the parameters required for correct functioning via the barman::autoconfigure class and making the whole setup process even easier.