Generated Primary and Foreign Keys for Distributed Databases

November 04, 2015

Common practice in relational database design is to utilize a non-repeating, ordered series of integers for primary keys (PK) and by extension, foreign keys (FK).  This practice has worked so well and efficiently for database designers that all commercially viable RDBMSs provide some means of automatically generating non-repeating, ordered integers.  For the sake of this discussion, let’s refer to them as SEQUENCEs because that is the term used by many of the DBMSs we work with.

Sequences have worked so well because they satisfy the two most important indexing requirements of RDBMSs:

  1. They are unique, so they can uniquely identify a table row.
  2. They are ordered, so they enable efficient clustered indexes for performance.

As long as databases were monolithic entities, sequences fulfilled their role perfectly.  With the introduction of distributed RDBMSs, through technologies like multi-master replication (MMR), sequences alone can no longer guarantee the vital PK and FK attribute of uniqueness.  We have to give them some help.

Any solution to extend or replace sequences as generated keys must meet the following requirements:

  1. They must guarantee uniqueness across multiple systems and databases.
  2. They must not impair the efficiency of clustered indexes as commonly implemented by RDBMSs.
  3. They must allow for the effective migration of pre-existing sequence-based databases.
  4. They must be reliable and maintainable.

There are a few types of solutions being tried to meet the index requirements of distributed data.  Some work well and others can be problematic.  Let’s examine some of them.

  1. Define sequence ranges or sets that are not duplicated across databases.  For example:

Database1 -- Start with 1001, increment by 5, i.e., (1001, 1006, 1011)

Database2 -- Start with 1002, increment by 5, i.e., (1002, 1007, 1012)

Database3 -- Start with 1003, increment by 5, i.e., (1003, 1008, 1013)

This appears to be the simplest solution and satisfies requirements 1 and 2.  However, it satisfies requirement 3 poorly and is very questionable against requirement 4.  This solution (and any other that tries to carve out sections of a sequence) does not scale well.  To migrate to it, one must update every existing row that uses the old sequence with new sequence numbers, and then update all the foreign key references.  Maintenance is another weakness: if we need to add new databases to the mix, our increments may no longer work.
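The interleaved-range scheme above can be sketched in a few lines of Python.  The generator below is illustrative (the name and the offset of 1000 are assumptions chosen to match the example values); it shows both why the streams never collide and why adding a node can break the scheme.

```python
def interleaved_sequence(node_id, increment, count):
    """Yield `count` key values for one node: start at 1000 + node_id,
    then step by `increment` so the configured nodes never collide."""
    value = 1000 + node_id
    for _ in range(count):
        yield value
        value += increment

db1 = list(interleaved_sequence(1, 5, 3))  # [1001, 1006, 1011]
db2 = list(interleaved_sequence(2, 5, 3))  # [1002, 1007, 1012]
db3 = list(interleaved_sequence(3, 5, 3))  # [1003, 1008, 1013]

# The three configured streams never overlap...
assert not (set(db1) & set(db2) | set(db1) & set(db3) | set(db2) & set(db3))

# ...but a hypothetical node 6 (1006, 1011, 1016, ...) collides with node 1,
# which is the maintenance weakness described above.
db6 = list(interleaved_sequence(6, 5, 3))
assert set(db1) & set(db6)
```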

  2. Replace sequences with GUIDs.  This solution meets requirement 1 very well.  However, with many DBMSs it fails requirement 2, because GUIDs are typically unordered and make clustered indexes much less efficient (though one can offset the problem with a concatenated key that includes something like a timestamp or a local sequence).  It does not make migration particularly easy, so it barely meets requirement 3: one must follow the same steps as solution 1.  It does, however, meet requirement 4 very well.
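The GUID trade-off can be illustrated with a short Python sketch, using the standard uuid module as a stand-in for the database's GUID generator (an assumption for illustration only).  Random GUIDs give uniqueness but no ordering; the concatenated-key mitigation mentioned above restores ordering.

```python
import uuid
from itertools import count

# Requirement 1 (uniqueness): random version-4 UUIDs are unique for all
# practical purposes.
random_keys = [uuid.uuid4() for _ in range(100)]
assert len(set(random_keys)) == 100

# Requirement 2 (ordering): random UUIDs arrive in no useful order, so pages
# in a clustered index fill randomly.  Prefixing a local sequence (or a
# timestamp), as suggested above, makes keys sort in insertion order:
seq = count(1)
stamped = [(next(seq), uuid.uuid4()) for _ in range(100)]
assert stamped == sorted(stamped)  # keys arrive already in index order
```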
  3. Use a database identifier and a sequence in a concatenated (2-column) key.  A database identifier is an integer column created by the customer; its value must be unique to each database node, and every concatenated key uses the identifier of the database that executed the original INSERT for the row.  The concatenated key is therefore guaranteed to be unique across all connected databases.  This solution fully meets requirements 1 and 2, and it permits a database to be converted through simple SQL scripts that preserve the original sequences, meeting requirement 3.  It also fully meets requirement 4.  Unfortunately, it requires modification of existing SQL queries to accommodate the additional key column.
  4. A refinement of solution 3 is to create a single key column that encapsulates both the database identifier and a serial integer (see the sample code below).  This solution meets all four requirements.  The SQL function mmr_sequence_convert(old_value) can be used to update all the original PK and FK values through SQL scripts, and the existing sequence or serial key columns can be ALTERed to BIGINT NOT NULL DEFAULT mmr_sequence_nextval('seq_name') to handle new INSERTs.  This approach has the advantage of not breaking FK relationships or existing SQL queries.
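To make the bit layout of solution 4 concrete, here is a small Python model of the single-column key.  The 52-bit shift mirrors the SQL sample code in this post; the function names are illustrative, not part of any API.

```python
DB_ID_SHIFT = 52  # matches the `<< 52` used in the SQL functions

def mmr_key(db_id, seq_value):
    """Compose a single 64-bit key: database identifier in the high bits,
    local sequence value in the low 52 bits."""
    return (db_id << DB_ID_SHIFT) + seq_value

def split_key(key):
    """Recover (db_id, local sequence value) from a composed key."""
    return key >> DB_ID_SHIFT, key & ((1 << DB_ID_SHIFT) - 1)

# The key round-trips, so the originating node is always recoverable:
assert split_key(mmr_key(3, 1001)) == (3, 1001)

# Keys from different nodes can never collide while local values stay below
# 2**52, and keys generated on one node remain sequentially ordered:
assert mmr_key(1, 2**52 - 1) < mmr_key(2, 1)
assert mmr_key(3, 1001) < mmr_key(3, 1002)
```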

The distributed databases made possible by technologies such as the xDB Multi-Master Replication Server are powerful data management tools.  With a little extra forethought, they can live up to their promise.

Examples for MMR-ready Sequences (sincere appreciation to my EDB colleagues Vibhor Kumar and Adam Wright for their PostgreSQL knowledge and assistance)

1.  Create a unique identifier for the database (both statements are needed, and the integer value should not exceed 1024):

ALTER DATABASE dbname SET cluster.unique_db_id TO integer;

SET cluster.unique_db_id TO integer;

2.  Create a sequence (if one doesn’t exist):

CREATE SEQUENCE seq_name;
3.  Create functions to use the MMR-ready sequence (as written, these functions support unique database identifiers from 1 through 1024):

-- Gets the next MMR-ready sequence value

CREATE OR REPLACE FUNCTION mmr_sequence_nextval(seq_id varchar2)
RETURNS bigint
AS $$
SELECT ((SELECT current_setting('cluster.unique_db_id'))::bigint << 52)::bigint + nextval($1::regclass);
$$ LANGUAGE sql;


-- Gets the last used MMR-ready sequence value

CREATE OR REPLACE FUNCTION mmr_sequence_currval(seq_id varchar2)
RETURNS bigint
AS $$
SELECT ((SELECT current_setting('cluster.unique_db_id'))::bigint << 52)::bigint + currval($1::regclass);
$$ LANGUAGE sql;


-- Convert an existing standard sequence value to an MMR-ready one

CREATE OR REPLACE FUNCTION mmr_sequence_convert(old_seq_value bigint)
RETURNS bigint
AS $$
SELECT ((SELECT current_setting('cluster.unique_db_id'))::bigint << 52)::bigint + $1;
$$ LANGUAGE sql;
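The migration property of the convert function is worth spelling out.  A hypothetical Python equivalent (the db_id is a parameter here only because Python has no session setting to read) shows why old PK/FK pairs keep matching after conversion, while identical old values on different nodes become distinct:

```python
DB_ID_SHIFT = 52  # matches the `<< 52` in the SQL function above

def mmr_sequence_convert(db_id, old_seq_value):
    """Add the node's fixed offset to an old sequence value, as the SQL
    function does using cluster.unique_db_id."""
    return (db_id << DB_ID_SHIFT) + old_seq_value

# On one node, an old PK and the FK that referenced it convert to the same
# new value, so FK relationships survive the migration scripts:
old_pk = old_fk = 42
assert mmr_sequence_convert(7, old_pk) == mmr_sequence_convert(7, old_fk)

# The same old value on two different nodes converts to two distinct keys,
# so merged data cannot collide:
assert mmr_sequence_convert(1, 42) != mmr_sequence_convert(2, 42)
```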


4.  Use the MMR-ready sequence:

CREATE TABLE tab_name (

    id BIGINT NOT NULL DEFAULT mmr_sequence_nextval('seq_name'),

    field varchar2(20)

);

Dick Dowdell is a Senior Architect, xDB Replication Server, at EnterpriseDB.
