Giulio Calacoci


Technical Blog
Barman 2.1

Version 2.1 of Barman, the backup and recovery manager for PostgreSQL, was released Thursday, Jan. 5. Along with several bugfixes, the new release introduces preliminary support for the upcoming PostgreSQL 10 and adds the --archive option to the switch-xlog command.

switch-xlog --archive

The new --archive option is especially useful when setting up a new server. Until now, the switch-xlog...
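For context, Barman's switch-xlog relies on the server-side function that closes the current WAL segment; a minimal sketch of calling it directly from SQL (pg_switch_xlog() is the pre-10 name, renamed pg_switch_wal() in PostgreSQL 10):

    -- Ask PostgreSQL to close the current WAL segment so it becomes
    -- eligible for archiving (PostgreSQL 9.x name shown here)
    SELECT pg_switch_xlog();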
Technical Blog
On the 22nd of September I attended, as a speaker, the 8th edition of the PostgreSQL sessions, a conference in Lyon organised by Dalibo and Oslandia. The conference was centred around PostgreSQL and PostGIS, and I had the honour to announce the imminent release of Barman version 2.0. I was really excited about the opportunity to speak in front of the PostgreSQL community in France, and as a...
Technical Blog
The release of PostgreSQL 9.5 is imminent, so the time has come to analyse what's new in this latest version. A very interesting feature of version 9.5 is the ability to import a schema from a remote database, using a Foreign Data Wrapper and the IMPORT FOREIGN SCHEMA command.

Foreign Data Wrappers (FDW)

Before the introduction of Foreign Data Wrappers, the only way to connect a Postgres database...
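As a taste of the feature described above, here is a minimal sketch using postgres_fdw; the server name, connection details and schema names are assumptions for illustration:

    -- Load the foreign data wrapper for remote PostgreSQL servers
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    -- Hypothetical remote server and credentials
    CREATE SERVER remote_db
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'remote.example.com', dbname 'sourcedb', port '5432');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER remote_db
        OPTIONS (user 'app_user', password 'secret');

    -- Local schema that will receive the imported definitions
    CREATE SCHEMA remote_public;

    -- New in 9.5: import all foreign table definitions in one command
    IMPORT FOREIGN SCHEMA public
        FROM SERVER remote_db
        INTO remote_public;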
Technical Blog
The 1.4.0 version of Barman adds new features, such as incremental backup and automatic integration with pg_stat_archiver, which aim to simplify the life of DBAs and system administrators.

Barman 1.4.0: the most important changes

The latest release introduces a new backup mode: the incremental backup. This mode allows the reuse of unmodified files between one periodic backup and another...
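pg_stat_archiver is the statistics view, introduced in PostgreSQL 9.4, that Barman now integrates with; you can inspect it yourself from psql:

    -- Continuous archiving statistics exposed by PostgreSQL 9.4+
    SELECT archived_count, last_archived_wal, last_archived_time,
           failed_count, last_failed_wal
    FROM pg_stat_archiver;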
Technical Blog
Scenario: we have a remote data source, served by a gpfdist server, and we need to import the data into a Greenplum database while performing some ETL manipulation during the import. It is possible to accomplish this goal with a simple transformation, in a few steps, using Kettle. For the sake of simplicity, I assume that you know Kettle basics, like connections, jobs, and transformations; for further...
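For context, a file served by gpfdist is typically exposed to Greenplum as a readable external table; a minimal sketch, in which host, port, file name and columns are assumptions:

    -- Hypothetical readable external table backed by a gpfdist server
    CREATE EXTERNAL TABLE ext_users (
        id       integer,
        name     text,
        state_id integer
    )
    LOCATION ('gpfdist://etl-host:8081/users.csv')
    FORMAT 'CSV' (HEADER);

    -- Selecting from it streams the remote file through gpfdist
    SELECT count(*) FROM ext_users;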
Technical Blog
In the first part of this article we created a job and a database connection, and defined the flow in Kettle. In this second part we'll see how Kettle manages the data import from the CSV files. Click on the “New” button at the top left of the Kettle window and select “new transformation”. From the Design tab, drag a “CSV input” component; you can find it under the “Input” folder. Once you drop...
Technical Blog
Recently I showed you how to perform a data import from a CSV file into a Greenplum database, using Talend Community Edition. In this article I'm going to perform the same task using another ETL tool: Kettle. For the sake of simplicity, I'll assume that you have a very simple test database on Greenplum, with two tables and a logical 1:N relationship between them: we choose states and users (a...
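A minimal sketch of such a test schema, with assumed column names (the 1:N link is only logical, so no constraint is declared):

    -- The "1" side of the relationship
    CREATE TABLE states (
        id   integer PRIMARY KEY,
        name text NOT NULL
    );

    -- The "N" side: each user logically references a state
    CREATE TABLE users (
        id       integer PRIMARY KEY,
        name     text NOT NULL,
        state_id integer  -- logical reference to states.id
    );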
Technical Blog
In the first part of this tutorial we set up all the connections required for creating the job; now we can proceed with the data import. Let's drag and drop into the visual editor an object named tMap. You can find it on the left, in the component palette, inside the “Processing” folder. Now we need to connect the “states” CSV object with the tMap element (right-click on the CSV element ->...
Technical Blog
When working with databases, one of the most common tasks is to load data from one or more CSV files. Several tools are available to achieve this. Some are executed via the command line, like COPY (using psql); others, like ETL systems, are more complex. We will start today with Talend but, in the coming weeks, we will continue with Kettle (Pentaho Data Integration). In this tutorial we will learn how to...
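As a minimal example of the command-line route mentioned above (table and file names are assumptions):

    -- Server-side load: the file must be readable by the database server
    COPY states FROM '/tmp/states.csv' WITH CSV HEADER;

    -- Client-side alternative, run from the psql prompt:
    -- \copy states FROM 'states.csv' WITH CSV HEADER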