On Developing OAuth

September 25, 2025

PostgreSQL 18 ships with a new framework for supporting OAuth 2.0, an open authorization standard that has seen wide use on the Internet for years. I posted my first proof-of-concept for this back in 2021, so it’s been a long road, and I’m both excited and nervous to see it out in the wider world for the first time.

EDB’s Guang Yi Xu has written a great set of technical articles covering the feature, and I don’t intend to replicate that here. Instead, I’d like to talk a bit about the motivation and development process behind this code.

Why pick OAuth? And why now?

So, let me open with a bunch of strong opinions.

The default authentication method for Postgres, SCRAM, is on solid cryptographic footing, keeps receiving improvements, and will continue to be an all-around excellent choice for both machine-to-machine communication and small numbers of human clients. But I don’t think it scales to large human systems very well, because the maintenance boils down to individual password management for every possible (user, cluster) pair. And it’s natural for someone who’s trying to secure systems of multiple databases, talking to hundreds or thousands of people, to say, “I wish I could keep all this information in one place.”

The current solutions supported by Postgres for third-party authentication, unfortunately, are a bit of a mess. We have:

  • LDAP, which is just sending passwords in plaintext around and hoping for the best;
  • Kerberos, which in my opinion is difficult to audit and secure as an overall solution, has interoperability issues across platforms, and feels very much like an architecture that is steadily running out of steam; and
  • TLS client certificates, which – although I like them very much! – are difficult to maintain in practice for most people, who are quite reasonably not excited to run a certificate authority.
  • (Bonus: I guess there’s RADIUS too. I can’t comment on it; I’ve never seen it deployed in practice.)

When I first started working on the OAuth patchset, I had recently gotten fed up with the customer support implications of one particular LDAP deployment. The scripts which synchronized Postgres roles with the directory, regularly running from all the database clusters, had started to cause the LDAP server itself to fall over. And I wanted so badly to make that entire style of architecture disappear.

OAuth moves things forward in several ways. A client doesn’t need to care how users are authenticated, so you can move to the latest best practices for passwords/keys/MFA without Postgres needing to know about it. The client’s identity is kept separate from the user’s identity, so letting a program connect to databases on your behalf does not implicitly give that program the ability to do anything else in your name. And the things that actually give power to clients – the OAuth tokens – are time-limited and (usually) self-describing, so it’s possible for an OAuth validator running inside Postgres to make safe decisions without having to ping a central server all the time.
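
To make that last point concrete: in v18, the server-side half of that decision lives in a loadable “validator” module. Here’s a deliberately incomplete C skeleton of one, with every interesting check elided – please treat it as a sketch of the shape of the API, not something to deploy, and consult the “OAuth Validator Modules” documentation for the real contract:

    /* Sketch of a v18 validator module; all real token checking is elided. */
    #include "postgres.h"
    #include "libpq/oauth.h"

    PG_MODULE_MAGIC;

    static bool
    validate_token(const ValidatorModuleState *state,
                   const char *token, const char *role,
                   ValidatorModuleResult *res)
    {
        /*
         * A real module inspects the (self-describing) token here: verify its
         * signature, check expiry, audience, and scopes, and extract the user
         * identity – all without a round trip to a central server.
         */
        res->authorized = false;   /* flip to true only after real checks */
        res->authn_id = NULL;      /* or the verified identity, pstrdup()'d */
        return true;               /* false would mean validation itself failed */
    }

    static const OAuthValidatorCallbacks callbacks = {
        .magic = PG_OAUTH_VALIDATOR_MAGIC,
        .validate_cb = validate_token,
    };

    const OAuthValidatorCallbacks *
    _PG_oauth_validator_module_init(void)
    {
        return &callbacks;
    }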

Now, at this point in the post, I could lie and say something like “and so OAuth fixed all our problems!” That would be a great way to get everyone to laugh at me in public. OAuth is also a mess. It’s a maze of specifications and drafts and implementations that claim compatibility but then don’t interoperate in weird corner cases because a little bit too much was left as an exercise for the reader. And frankly, that’s kind of par for the course when it comes to web-scale technologies, so it would be fair for you to wonder why I feel good about it.

I feel good about implementing OAuth, in spite of interoperability issues and underspecifications and missteps, because OAuth looks to me like an ecosystem of many people working across different organizations, on a variety of different important use cases, on top of solid open technologies like HTTPS and public-key crypto, to constantly improve the system that many other people rely on to secure their own stuff. And as someone who originally came from a web background, I think all that seems like a good long-term bet.

My crystal ball is very broken, so I may look back at this in a decade to find that it has not aged well. But as of today, I am optimistic. And it just so happens that I like reading IETF specs and synthesizing them into software. So I worked on this.

Who else worked on this and gave you input?

The credits for the two largest commits in the set [1][2] list more than a dozen reviewers from the Postgres community over the four-plus years that I was working on this. That includes code review, architectural review, security review, usability review, and so on. Peter Eisentraut also helped me quite a bit with the overall strategy, since structuring a huge patchset for review by humans is, in many ways, just figuring out how to tell a good story.

On the coding side, Daniel Gustafsson and Thomas Munro at Microsoft contributed a working server-side API, BSD support, documentation, and more. Daniel was also the committer for the final patchset. And EDB’s Kashif Zeeshan did an amazing amount of user testing, which among other things stopped me from shipping a critical usability flaw in the architecture.

To everyone who helped out with this, whether in hallway conversations or drive-by review or in-depth development: Thank you so much!

How did you stay motivated for four years?

Well, I certainly wasn’t working on it heads-down for four years straight. I took several very long breaks from the patchset – six months, nine months – and I’d go work on something else, get motivated again, and come back to it. And the breaks also gave other people time to get interested in the feature, which helped feed back into even more motivation to get it finished.

The “long tail” at the very end of the cycle, though – where you’re just deciding what needs to be gold-plated, and what’s okay to leave as-is for now? That’s still brutal. In many ways, the progress there was carried by the community’s reviewers, who keep this same development cycle moving for other people across the whole Postgres project.

Why is this exciting for a dev? For a customer? For end users?

I’m excited about OAuth, because it takes steps towards an ecosystem where we base decisions less on usernames, and more on statements of what users are allowed to do. I think systems that work that way are more easily scalable. And it opens up niche use cases like anonymous access, where you know that an end user is allowed in, but you don’t have to know who it is.

For customers, I think there are DBAs who will be excited to move away from the sync scripts and other bespoke infrastructure that is needed to bridge the gaps between their SSO server and their databases. That’s not a quick flip of the switch, by any means, but an offramp is there. As validator implementations develop and mature, I think we can get to the point where DBAs can just set up their authentication the way they want it to work in their organizations, and move on to other more interesting things to do with their time.
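
To give a flavor of what that setup looks like in v18: you pick a validator library in postgresql.conf, point pg_hba.conf at your issuer, and tell clients where that issuer is. Everything below – the issuer URL, my_validator, the client ID – is a placeholder, and the documentation has the full option list:

    # postgresql.conf: load a validator module (name is a placeholder)
    oauth_validator_libraries = 'my_validator'

    # pg_hba.conf: ask for OAuth on these connections
    host  all  all  0.0.0.0/0  oauth  issuer="https://issuer.example.com" scope="openid"

    # libpq client: the flow needs the issuer and a client ID
    psql 'host=db.example.com oauth_issuer=https://issuer.example.com oauth_client_id=my-client-id'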

As for end users… are end users ever really “excited” to log into things? I’m not sure. But I do hope that, for users who already need to abide by their organizations’ auth setups, this feature helps reduce friction and lets them use the authentication tools that they’re already familiar with in more places. (And if that’s not the case, I hope to hear about it on the lists so that we can improve things for v19!)

You know what’s even more exciting? Future improvements! What are they?

Here are some of the things I wasn’t able to get to in the v18 implementation, that I think people are really going to want eventually. (Almost all of them relate to OAuth “flows”: the methods by which your Postgres client communicates with your OAuth provider to figure out who you are and what permissions you’re willing to give it.)

  • I want the ability to switch flows without writing custom code. In PG18, you can provide your own client flow for your own programs, but you can’t plug that same flow into other utilities without opening them up, too. That’s not going to be sufficient for everyone, and we need to find a way to (safely) allow users to switch to their own implementations. (There’s a sketch of the current extension point after this list.)
  • I want safe token caching for the flows that we ship. Just like no one wants to enter their password for every single action they take on their computer, no one wants to re-execute a client flow for every single connection. Custom flows (see above) can always implement their own caching, but we should provide something that can be used by default by the community. We just need to find a safe place to put those tokens...
  • I want a built-in machine-to-machine flow. The PG18 flow is designed for humans with a physical device, so autonomous scripts and services have to implement custom code if they want to use the same OAuth providers as people. (Those same scripts and services have other strong, non-OAuth, options available in Postgres, but it’ll make sense for some DBAs to further simplify and unify their setups.)    
  • I want tokens to be bound to their clients. If a token is leaked or somehow stolen in transit, it can be used to impersonate the client until it expires. (This is still an improvement over plaintext passwords, but it’s not great.) Since I started working on this feature, a couple of specifications have popped up to combat this, and I think we’ll need to move in that direction shortly.    
  • I want a built-in validator implementation. I was pretty disappointed that I couldn’t provide a fully “batteries included” experience for OAuth in v18. I’m not sure that it’ll really be possible until a bunch of popular providers agree on a large number of conventions for the tokens that they issue, but I’m holding out hope. I can dream.
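
On that first bullet: the v18 extension point for custom flows is a hook installed into libpq with PQsetAuthDataHook(). Here’s a bare-bones sketch, in which the MY_OAUTH_TOKEN environment variable stands in for wherever your flow actually gets its token, and with all error handling elided:

    /* A bare-bones custom flow: hand libpq a Bearer token directly instead
     * of running the built-in device flow. Illustrative only; see the libpq
     * documentation for the exact hook contract. */
    #include <stdlib.h>
    #include <string.h>
    #include "libpq-fe.h"

    static void
    cleanup_token(PGconn *conn, PGoauthBearerRequest *request)
    {
        free(request->token);       /* release what the hook allocated */
    }

    static int
    my_auth_data_hook(PGauthData type, PGconn *conn, void *data)
    {
        PGoauthBearerRequest *request = data;
        const char *token;

        if (type != PQAUTHDATA_OAUTH_BEARER_TOKEN)
            return 0;               /* let libpq handle anything else */

        /* Stand-in token source; a real flow talks to the provider here. */
        token = getenv("MY_OAUTH_TOKEN");
        if (!token)
            return 0;               /* fall back to the built-in flow */

        request->token = strdup(token);
        request->cleanup = cleanup_token;
        return 1;                   /* handled */
    }

    /* Install it once, before connecting: */
    /*     PQsetAuthDataHook(my_auth_data_hook); */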

So there’s plenty of work left. But it’s hard developing a feature in a userless vacuum, and I’m looking forward to feedback from people using this in real use cases. If you have some of that feedback, whether it’s thoughts on what you’d like next or thoughts on what you don’t like now, please let me know on the lists!
