In the second part of the Automating Barman with Puppet series we configured, via Puppet, two virtual machines: a PostgreSQL server and a Barman server to back it up. However, human intervention was still required to perform the SSH key exchange, and most of the manifest was written just to allow the two servers to access each other. In this third and final part of the series, we will look at how to configure a third VM that will act as the Puppet Master, and use it to simplify the configuration of PostgreSQL and Barman.
The entire code of this tutorial is on GitHub at http://github.com/2ndquadrant-it/vagrant-puppet-barman.
Configuring the Puppet Master: Vagrant
First, change the Vagrantfile to boot a third VM, called “puppet”, which will be our Puppet Master. To ensure that the machine is immediately accessible by the Puppet agents present on each VM, the first script we run adds a “puppet” entry to the /etc/hosts file. We also need to enable the Puppet agent, as Debian-like distributions disable it by default.
Finally, within the Vagrantfile, let’s make a distinction between master and agents. The master will initially load its configuration straight from the manifest files; the agents running on each host will then apply the configuration sent from the master. Agents will also send data back to the master, allowing other nodes to use it to build their configuration. For this reason, an agent is also set to run on the master.
The Vagrantfile is as follows:
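A condensed sketch of the file follows; the box name, the master’s IP address and the exact provisioning scripts are illustrative, and the full version is in the GitHub repository:

```ruby
# -*- mode: ruby -*-
Vagrant.configure(2) do |config|
  {
    'pg'     => '192.168.56.221',
    'backup' => '192.168.56.222',
    'puppet' => '192.168.56.223',  # the Puppet Master
  }.each do |name, ip|
    config.vm.define name do |node|
      node.vm.box      = 'ubuntu/trusty64'
      node.vm.hostname = name
      node.vm.network :private_network, ip: ip

      # First script: make the master resolvable as "puppet" and
      # re-enable the agent, which Debian-like boxes ship disabled
      node.vm.provision :shell, inline: <<-EOS
        grep -q ' puppet$' /etc/hosts || echo '192.168.56.223 puppet' >> /etc/hosts
        sed -i 's/START=no/START=yes/' /etc/default/puppet
      EOS

      if name == 'puppet'
        # Only the master applies the manifests directly
        node.vm.provision :puppet do |puppet|
          puppet.manifests_path = 'manifests'
          puppet.manifest_file  = 'site.pp'
        end
      end

      # Every node, master included, then runs an agent
      node.vm.provision :shell, inline: 'puppet agent --test || true'
    end
  end
end
```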
Configuring the Puppet Master: Puppet
Once we have the Vagrantfile, it’s time to write a Puppet manifest for the master. Two additional modules are required: puppetlabs/puppetdb and stephenrjonson/puppet. puppetlabs/puppetdb configures PuppetDB, which uses a PostgreSQL database to collect the events and resources exported by the infrastructure nodes, so that they can exchange information and configure each other. stephenrjonson/puppet allows you to configure a Puppet Master with Apache and Passenger, as well as the Puppet agents on the various nodes of the network.
Our Puppetfile will look like this:
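A sketch, assuming the modules from the earlier parts of the series are still in place and omitting version pins:

```ruby
forge 'https://forgeapi.puppetlabs.com'

# Modules used in the previous parts of the series
mod 'it2ndq/barman'
mod 'puppetlabs/postgresql'

# New modules for the Puppet Master
mod 'puppetlabs/puppetdb'
mod 'stephenrjonson/puppet'
```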
We can now run the module installation again to fetch the new modules.
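Assuming the modules are managed with librarian-puppet, as in the earlier parts of the series:

```bash
librarian-puppet install --verbose
```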
At this point we can edit the site.pp manifest, adding a puppet node with the following snippet for PuppetDB and the Puppet Master:
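A minimal sketch of the node definition; the class and parameter names follow the two modules, but the exact values are illustrative:

```puppet
node 'puppet' {
  # PuppetDB, with its own PostgreSQL backend, collects the
  # resources exported by the agents
  class { 'puppetdb': }

  # Puppet Master served by Apache and Passenger
  class { 'puppet::master':
    autosign     => true,        # accept every agent certificate
    storeconfigs => true,        # store and distribute exported resources
    environments => 'directory', # use directory environments
  }
}
```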
We have thus configured the Puppet Master to automatically accept connections from all the machines (autosign) and to store and distribute catalogues, events and exported resources (storeconfigs). Finally, we use Puppet’s directory environments to distribute the catalogue to the agents. The standard directory for environments is /etc/puppet/environments and the default environment is production, to which our manifests and modules will belong. As Vagrant already shares the directory containing the Vagrantfile with the machines it creates, we can simply create a symbolic link to it:
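For example (the target path is an assumption; adapt it to where the manifests and modules live inside the shared folder):

```bash
# /vagrant is the folder Vagrant shares with every VM
mkdir -p /etc/puppet/environments
ln -fs /vagrant/puppet /etc/puppet/environments/production
```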
We need to configure the agent on every node, choose how it should be run and which environment to use, and point it towards the Puppet Master. Running the agent via cron takes up fewer resources than running it as a daemon:
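A sketch using the puppet class from stephenrjonson/puppet, applied to every node; the values are illustrative:

```puppet
class { 'puppet':
  runmode     => 'cron',       # run periodically instead of as a daemon
  server      => 'puppet',     # the entry we added to /etc/hosts
  environment => 'production',
}
```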
We can now begin sharing resources between the nodes. The pg and backup nodes need to communicate with each other via SSH, so each must know the IP of the other server and have its host key in known_hosts. We export and collect these resources on each node, as shown in the following snippet:
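A sketch of the pattern, using Puppet’s exported resources; the eth1 fact refers to the private network interface created by Vagrant and is an assumption:

```puppet
# Export this node's address and SSH host key...
@@host { $::hostname:
  ensure => present,
  ip     => $::ipaddress_eth1,
}
@@sshkey { $::hostname:
  ensure => present,
  type   => 'rsa',
  key    => $::sshrsakey,
}

# ...and collect what the other nodes exported
Host <<| |>>
Sshkey <<| |>>
```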
barman::autoconfigure
We now have everything we need to configure the PostgreSQL server and the Barman server, and thanks to autoconfiguration the next step becomes much easier. For backup, it’s as simple as setting the autoconfigure parameter and exporting the right IP address. The Vagrant machines have two IP addresses, so we must force backup to use 192.168.56.222. Moreover, we are going to use the PGDG Barman package, enabled through manage_package_repo:
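A sketch of the backup node; the parameter names follow the it2ndq/barman module:

```puppet
node 'backup' {
  class { 'barman':
    autoconfigure       => true,
    # Vagrant machines have two addresses: force the private one
    exported_ipaddress  => '192.168.56.222/32',
    # Install Barman from the PGDG repository
    manage_package_repo => true,
  }
}
```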
On the pg node we install the PostgreSQL server and, through the barman::postgres class, declare how Barman manages it. The class exports the cron entry for the barman backup pg command and the definition of the server for Barman, which will be imported by the backup server via autoconfigure:
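A sketch of the pg node; the backup policy values are illustrative:

```puppet
node 'pg' {
  # The PostgreSQL server to be backed up
  class { 'postgresql::server':
    listen_addresses => '*',
  }

  # Declare how Barman manages this server; the class exports
  # the server definition and the backup cron entry
  class { 'barman::postgres':
    retention_policy        => 'RECOVERY WINDOW OF 1 WEEK',
    minimum_redundancy      => 1,
    last_backup_maximum_age => '1 WEEK',
    reuse_backup            => 'link',
    backup_hour             => 1,
    backup_minute           => 0,
  }
}
```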
Testing
Everything we have looked at so far can be tested by cloning the project on GitHub and executing the following commands in the newly-created directory:
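For example:

```bash
git clone https://github.com/2ndquadrant-it/vagrant-puppet-barman.git
cd vagrant-puppet-barman
vagrant up          # boots the VMs and runs the first provisioning
vagrant provision   # second run: resources are exported
vagrant provision   # third run: resources are collected everywhere
```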
The system has to perform three provisioning runs (the first is included in the initial vagrant up) before all the exported resources have been collected by the nodes. At this point we can log into the backup machine and check that backups can be performed:
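For example, running Barman as the barman user:

```bash
vagrant ssh backup
# Inside the VM:
sudo -iu barman barman check all
sudo -iu barman barman backup all
```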
Conclusion
Although the initial configuration of a Puppet Master can be laborious, its benefits are enormous. Not only is the configuration of Barman much easier, but any other addition to the infrastructure is significantly simplified. For example, adding an Icinga or Nagios server becomes much simpler when every single server is able to export the services that need to be monitored (check_postgres or barman check --nagios).
Also, the above example uses a single PostgreSQL server and a single Barman server, but in complex infrastructures with many database servers it is possible to declare multiple Barman servers and use host_group to identify the Postgres servers that each Barman server should back up.
Thank you for reading the Automating Barman with Puppet series; I hope it has been useful, and I would love to know your thoughts.
Finally, a special thank you goes to Alessandro Franceschi for the initial idea of adding an autoconfiguration system to the Barman module.