02-Mar-2025 "Serverless" Postgres: A Developer's Dream with just a Pinch of Reality
Firebase has been around for some 15 years with its real-time database offering.
It has been owned by Google for the last 10 of those years, and for nearly as long it has provided a more flexible store, Cloud Firestore.
But despite plenty of goodies, it offered only document databases.
The niche of a relational database was vacant. Until Supabase took it.
Consistency enforcement, formally defined data relations, and structured queries are what make a relational database stand out.
Firebase is trying to catch up with its Data Connect offering.
There are numerous other products on the serverless relational database market.
The Problem
Lately, the back-end has been perceived as an unavoidable necessity: an API provider for browser and mobile apps, buried out of sight.
Sometimes it serves only one purpose - to store and provide data.
The back-end thins out, offloading logic to third-party service providers:
automation (Zapier);
image resizing, file uploads, and serving static assets (AWS S3);
DDoS protection, caching and load balancing (Cloudflare);
user authentication (Auth0 and WorkOS);
user management (Okta);
providing analytical data (Looker);
full-text search (Algolia);
pub-sub and real-time data;
logging;
transactional and bulk emails, push notifications;
... you name it.
New architectures offer a backend-less approach.
Well, the back-end and servers are still involved, just not owned or directly managed by us.
Yet the back-end kept holding on to the database, which remained at the core of every product built.
Self-hosting a database, be it Redis, Postgres, or any other - scaling it and keeping it highly available - is hard.
And it's a burden, taking the focus away from your business goals.
The missing part in the jigsaw was a relational database managed by a third-party.
Not missing anymore.
The Promise
Supabase - a full Postgres database accessible from the front-end via a JavaScript API, with row-level security for granular user access to data, plus authentication, file uploads, real-time streaming, vector storage, message queues, and background job processing - sounds just fantastic.
Yet, it is real.
Just imagine the possibilities.
Adding dynamic features such as Medium-like highlights, contextual notes, and comments, or Reddit-like ↓↑ votes, to a static site becomes trivial.
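For example, recording a Reddit-style upvote straight from a static page takes only a few lines with the supabase-js client. This is a rough sketch - the `votes` table and its columns here are hypothetical, and the project URL and key are placeholders:

import { createClient } from '@supabase/supabase-js'

// hypothetical table: votes(article_id, value)
const supabase = createClient('https://your-project.supabase.co', 'public-anon-key')
await supabase.from('votes').insert({ article_id: 42, value: 1 })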
There's a whole list of projects made with Supabase, and that's just a small subset of what was built even before Supabase went GA.
more than a third of the current YC are building with Supabase
Wow!
That means something more complicated than just adding comments to a static site can be possible, right?
Thought experiment
Can you imagine having a whole bunch of options to sign up users, and letting them fetch and save only the data you've declared accessible to them? For example (a sketch of how a couple of these rules map to SQL policies follows the list):
- Any user can add a new `article` record
- Only the user can change the text or the title of their article
- Any user can add a comment on any other user's article
- Only the comment author or the article author can remove a comment
- You can vote on articles and comments, but only the ones you're not an author of
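To make this concrete, here is a minimal sketch of how the first two rules could be expressed as Postgres row-level security policies, assuming an `articles` table with an `author_id` column and Supabase's `auth.uid()` helper, which returns the ID of the currently authenticated user:

-- assumes: articles(id, author_id uuid, title text, body text)
ALTER TABLE articles ENABLE ROW LEVEL SECURITY;

-- rule 1: any authenticated user can add an article, but only as themselves
CREATE POLICY "Insert own articles" ON articles
  FOR INSERT TO authenticated
  WITH CHECK ( auth.uid() = author_id );

-- rule 2: only the author can change the title or text of their article
CREATE POLICY "Update own articles" ON articles
  FOR UPDATE TO authenticated
  USING ( auth.uid() = author_id );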
Database backups, scaling, high availability, the underlying "disk" storage, engine version upgrades, connection pooling - all of those problems are transparently handled for you.
Saving big on development and DB maintenance time looks plausible.
And no more back-end development or maintenance either!
Isn't this beautiful? 🥹
It is worth saying that Supabase is not the only one.
I've discovered a few alternatives (in the order I discovered them): Prisma - a battle-tested ORM that can be used with Supabase and, more recently, their own cloud Postgres, still in early access; Nile; Neon; and more...
How does it work internally? At a glance, it's a mix of the following (realistic-looking pseudocode below):
- a JS wrapper around a protocol allowing access to Postgres over HTTP:
  db.from('articles').all()
- authentication:
  auth.login('username', 'password')
  or
  auth.social.linkedInOIDC.signIn()
- row-level security policies:
  ALTER TABLE articles ENABLE ROW LEVEL SECURITY;
  CREATE POLICY "Fetch own articles"
    ON "public"."articles"
    FOR SELECT
    TO authenticated
    USING ( current_user_id = author_id );
- Common Table Expressions:
  WITH auth AS (
    SELECT uid AS current_user_id FROM auth.sessions WHERE session_id = 'foo-bar-baz'
  )
  SELECT * FROM articles;
By stacking these together, you get a Postgres database that can be accessed via a JS API and provides row-level, granular access for users.
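For comparison, here is roughly how the same pieces look with the actual supabase-js client. This is only a sketch with a placeholder project URL and key; check the Supabase docs for the current API:

import { createClient } from '@supabase/supabase-js'

const supabase = createClient('https://your-project.supabase.co', 'public-anon-key')

// authentication: password-based sign-in or a social OIDC provider
await supabase.auth.signInWithPassword({ email: 'user@example.com', password: 'secret' })
// or: await supabase.auth.signInWithOAuth({ provider: 'linkedin_oidc' })

// the HTTP wrapper around Postgres: row-level security decides which rows come back
const { data: articles, error } = await supabase.from('articles').select('*')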
I've built a project backed by Supabase. Did it work the way it's promised, the way I imagined it would? The reality turned out to be completely different...