PgBouncer vs pgpool-II vs built-in pooling—here's what we learned running all three
We spent six months this year untangling our connection management after hitting Postgres's connection limits during traffic spikes. Figured I'd share what we learned since this seems to be a rite of passage for any app that scales past a certain point.
The problem we had: Our Rails app was configured with a connection pool of 10 per process. With 20 Puma workers on each of 4 servers, that's 80 processes and 800 potential connections. Our RDS instance's max_connections was 500. During traffic spikes, we'd see `FATAL: too many connections` errors and cascading failures as requests backed up.
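For anyone following the arithmetic, the relevant knob lives in the Rails config. A sketch with our numbers (the pool option is standard; the host and numbers are specific to our setup):

```yaml
# config/database.yml -- the app-side pool that multiplied out to 800
production:
  adapter: postgresql
  pool: 10   # connections per Puma process
# 10 per process x 20 workers x 4 servers = 800 potential connections,
# against an RDS max_connections of 500
```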
What we tried:
PgBouncer ended up being our solution. It sits between your app and Postgres, maintaining a smaller pool of actual database connections and multiplexing application connections onto them. We run it in transaction mode, which means a connection is only "used" for the duration of a transaction, then returned to the pool.
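For reference, a minimal transaction-mode config looks something like this. The option names are real PgBouncer settings; the host, database, and pool sizes are placeholders, not a recommendation:

```ini
; /etc/pgbouncer/pgbouncer.ini
[databases]
; clients connect to PgBouncer as "appdb"; it forwards to the real host
appdb = host=db.internal port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; server connection held only for the transaction
default_pool_size = 20    ; real server connections per user/db pair
max_client_conn = 500     ; app-side connections PgBouncer will accept
```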
The gotchas: Session-level features don't work in transaction mode. Prepared statements, advisory locks, SET commands that should persist across transactions: all broken, because consecutive transactions from the same client can land on different server connections. We had to audit our codebase for these. Also, LISTEN/NOTIFY doesn't work through PgBouncer in transaction mode, which broke one of our features.
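If you're on Rails, the prepared-statement part of that audit is mostly a config switch; the Postgres adapter supports disabling both prepared statements and advisory locks per environment (these are documented adapter options, shown here as a sketch):

```yaml
# config/database.yml
production:
  adapter: postgresql
  prepared_statements: false  # needed for PgBouncer transaction mode
  advisory_locks: false       # Rails uses advisory locks around migrations
```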
pgpool-II is more feature-rich (load balancing, automatic failover, query caching) but also more complex. We tested it and found it was overkill for our needs. If you need read replica load balancing and don't want to handle it at the application level, it's worth considering. But for pure connection pooling, it's bringing a lot of machinery.
Built-in pooling (RDS Proxy / Supabase's Supavisor): If you're on a managed platform, this is increasingly viable. RDS Proxy has gotten better and handles a lot of the configuration automatically. The downside is you're adding another AWS service to your bill and your latency path.
What we settled on: PgBouncer in transaction mode, running as a sidecar on each application server. Connection pool in Rails reduced to 2 per process, PgBouncer pool size of 20 per server, feeding into a max of 100 actual Postgres connections. We went from connection exhaustion during spikes to never thinking about it.
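Concretely, the sidecar setup comes down to two small config changes on each app server. The option names are real; the numbers below are ours, not universal defaults:

```ini
; /etc/pgbouncer/pgbouncer.ini (sidecar on each app server)
pool_mode = transaction
default_pool_size = 20    ; 20 real Postgres connections per server
max_client_conn = 200     ; headroom above 20 workers x 2 app connections
```

```yaml
# config/database.yml -- point Rails at the local sidecar
production:
  host: 127.0.0.1
  port: 6432
  pool: 2   # per Puma process; 2 x 20 workers = 40 client conns per server
```

The math works out to at most 80 real connections across 4 servers, under the 100-connection budget. You can sanity-check utilization from PgBouncer's admin console (connect with psql to the special `pgbouncer` database and run `SHOW POOLS;`).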
The key insight: your application's "connection pool" and your database's actual connections don't have to be the same number. A pooler in the middle lets you decouple them.
What's your connection pooling setup? Anyone running into edge cases with transaction mode that I should watch out for?