NodeJS and PostgreSQL Connection Pool
Intro
In addition to the basic ID/password setup, a database usually requires several other important configurations. Among them, one of the most critical settings for running a production service is the connection pool.
Let’s take a closer look at why connection pool settings are necessary, and how they should be configured.
Experimental environment
For this experiment, we’ll be using the following environment:
- Node.js 16
- PostgreSQL 14 (via Docker)
We’ll also prepare a simple setup using Node and the pg library, as shown below:
```javascript
const express = require('express');
const pg = require('pg');

const app = express()
const port = 3000

const client = new pg.Pool({
  host: 'localhost',
  user: 'test',
  password: 'test',
  database: 'test',
  port: 5432,
  max: 5,
})

// Check connectivity once at startup. pg's Pool.connect() hands back the
// checked-out client and a release callback; calling release() returns the
// connection to the pool instead of holding it forever.
client.connect((err, conn, release) => {
  if (err) {
    console.log('Failed to connect db ' + err)
  } else {
    console.log('Connect to db done!')
    release()
  }
})

app.get('/', (req, res) => {
  res.send('Hello World!')
});

app.get('/test-timeout', async (req, res) => {
  const start = new Date();
  try {
    await client.query('SELECT pg_sleep(3);');
    const lag = new Date() - start;
    console.log(`Lag: \t${lag} ms`);
  } catch (e) {
    const lag = new Date() - start;
    console.log(`Lag: \t${lag} ms`);
    console.error('pg error', e);
  }
  res.send('test-timeout!');
});

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
});
```

Here's a very simple example:
- On each API call, we deliberately run a 3-second query (pg_sleep(3))
- Then we log how long it actually took
The key point here is the connection pool setting:
```javascript
const client = new pg.Pool({
  host: 'localhost',
  user: 'test',
  password: 'test',
  database: 'test',
  port: 5432,
  max: 5,
})
```

This sets the maximum number of connections that can be created to 5. (Default: 10)
If you set max (maxConnections) to 5, what would happen?
Test
To simulate a certain level of traffic, the test will be conducted using Apache Bench.
After starting Node and PostgreSQL (via Docker), enter the following command in the terminal:
```shell
ab -n 100 -c 20 http://localhost:3000/test-timeout/
```

- -n 100
  - The total number of requests (the count that all users combined will send)
- -c 20
  - The number of concurrent users (20 users sending requests at the same time)
Making the request like this will produce the following results.

In this test, 100 requests were made in total, with 20 users sending them concurrently.

- Average time per request
  - The first value under Time per request was 15,637 ms, meaning each request took about 15.6 seconds on average.
- Expected response time range
  - Under Percentage of the requests served within a certain time, the responses ranged between 15,027 ms and 15,047 ms.
- Requests per second
  - This is the throughput. The result was 1.28, meaning the system completed just over one request per second.

Looking at this, the test shows an average response time of about 15 seconds.
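As a quick cross-check, the ab numbers are internally consistent with Little's law (average latency ≈ requests in flight / throughput):

```javascript
// Little's law sanity check on the ab report above.
const concurrency = 20;   // ab -c 20: requests kept in flight
const throughput = 1.28;  // "Requests per second" from the ab output
const avgLatencySec = concurrency / throughput;
console.log(avgLatencySec.toFixed(1) + ' s'); // ≈ 15.6 s, matching Time per request
```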
But here’s the question: with a 3-second query called by 20 users, why did the average response time end up being 15 seconds?
What is a connection pool?
A connection pool is, simply put, a mechanism where each client request borrows a connection and, once the processing is done, returns that connection to the pool for reuse.
It’s a well-known database pattern designed to reduce the overhead involved in establishing database connections and performing read/write operations.
When connecting to and using a database in the usual way, the process looks like this:
- Open a database connection using the database driver
- Open a TCP socket for reading and writing data
- Read/write data through the socket
- Close the connection
- Close the socket
Establishing a database connection is a fairly expensive operation.
That’s why it should be minimized as much as possible.
This is where connection pooling comes in. By reusing existing connections instead of opening a new one for every operation, you cut out the expensive setup and teardown steps, which translates into a major performance boost. That's why connection pooling is considered essential for virtually all database-driven applications.
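To make the reuse idea concrete, here is a toy, in-memory pool (a sketch for illustration only, not how pg implements it): it hands out up to `max` pre-created "connections", and callers beyond that are queued until someone releases a connection back.

```javascript
// A toy connection pool, for illustration only (not pg's implementation).
class TinyPool {
  constructor(max) {
    // Pretend the expensive connect step already happened for `max` slots.
    this.free = Array.from({ length: max }, (_, i) => ({ id: i }));
    this.waiting = []; // resolve-callbacks of callers queued past `max`
  }
  acquire() {
    if (this.free.length > 0) {
      return Promise.resolve(this.free.pop());
    }
    return new Promise((resolve) => this.waiting.push(resolve));
  }
  release(conn) {
    const next = this.waiting.shift();
    if (next) next(conn);      // hand the connection straight to a waiter
    else this.free.push(conn); // or park it for reuse
  }
}
```

The expensive step (opening a real connection) happens once per slot; after that, every request only pays the cost of popping an object off an array.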
Connection Pool in the test
For example, if you set max: 4, internally it will work as follows:

If 7 requests requiring DB queries arrive at the same time, all 4 connections will be busy for 3 seconds, and the remaining 3 requests will be stuck waiting for one to free up.
So if you look at the Node logs from the test, you can see the 3-second delays stacking up.
(In other words, while the earlier requests are being processed, the remaining requests accumulate waiting time.)
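The queueing effect can be modeled analytically: with a pool of max connections and a q-second query, the i-th simultaneous request (0-based) finishes after roughly ceil((i + 1) / max) * q seconds. A sketch for the 7-requests/4-connections scenario above:

```javascript
// Estimated finish time (in seconds) for each of `n` simultaneous requests,
// given `max` pooled connections and a query that takes `q` seconds.
function finishTimes(n, max, q) {
  return Array.from({ length: n }, (_, i) => Math.ceil((i + 1) / max) * q);
}

console.log(finishTimes(7, 4, 3));  // [3, 3, 3, 3, 6, 6, 6]
console.log(finishTimes(20, 5, 3)); // the last batch finishes at 12 s
```

This is only a lower bound (it ignores network and scheduling overhead), but it shows how waves of 3-second delays stack up exactly as in the Node logs.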

Maximum wait time
After seeing the experiment, one concern naturally comes to mind.
What if 1,000 or 2,000 requests arrive simultaneously? You might start worrying about things like:
- “What if the wait time just to establish a connection takes several minutes?”
- The large number of unprocessed requests sitting in the queue could increase server load and even cause outages.
- If so, it might be better to cancel any request that waits beyond a certain threshold, reducing the load on the server instead.
For that, you can use the connectionTimeoutMillis setting.
```javascript
const client = new pg.Pool({
  ...
  connectionTimeoutMillis: 30000,
  max: 5,
})
```

With this setting, the maximum connection wait time is set to 30 seconds.

- The default value for connectionTimeoutMillis is unlimited (wait forever).
- The unit for connectionTimeoutMillis is milliseconds (ms).
Now let's set the maximum connection wait time to 10 seconds and run the same load test again.
This time, any request waiting more than 10 seconds will be canceled immediately.
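For the 10-second run, the only change from the configuration shown earlier is the timeout value (a sketch reusing the same test credentials):

```javascript
const client = new pg.Pool({
  host: 'localhost',
  user: 'test',
  password: 'test',
  database: 'test',
  port: 5432,
  max: 5,
  connectionTimeoutMillis: 10000, // give up after waiting 10 s for a connection
})
```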

The overall average response time is reduced from 15 seconds to 12 seconds.
(Of course, the number of canceled requests also increases.)

Precautions
Looking at these results, you might think the solution is simply to increase the number of connections. But that’s not necessarily the case. The real limitation is whether the database itself can actually handle that many connections.
In general, maintaining a large number of connections requires a proportional amount of database memory. That’s why managed database services, such as those on AWS, set connection limits by default to match the instance specifications.
For example, the formula for calculating max_connections in AWS RDS MySQL is: {DBInstanceClassMemory / 12582880}
So, if you're using a t2.micro RDS instance:
- Memory = 512 MB
- Therefore: (512 * 1024 * 1024) / 12582880 ≈ 42
Which means RDS will cap max_connections at roughly 40.
In other words, with a t2.micro, no more than about 40 connections can be active at the same time across everything: your Node application plus any database GUI tools.
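Spelled out as a quick computation (assuming the 512 MB figure above; the exact quotient is just under 43, which lines up with the limit of roughly 40 mentioned):

```javascript
// AWS RDS MySQL default: max_connections = DBInstanceClassMemory / 12582880
const memoryBytes = 512 * 1024 * 1024; // 512 MB in bytes
const maxConnections = Math.floor(memoryBytes / 12582880);
console.log(maxConnections); // 42
```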
That's why it's important to set an appropriate max (connection limit) in your Node pool options. To determine the right value, you should consider:
- How many servers your team is running the Node application on
- Whether each server runs multiple Node processes via PM2 or clustering
- How many servers you might scale up to under sudden traffic spikes
- How many developers are accessing the database from their local PCs
By reviewing these factors, you can set a reasonable max value for your connection pool.
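Putting those factors together, a back-of-the-envelope sizing calculation might look like the following (every number here is hypothetical, purely for illustration):

```javascript
// Hypothetical capacity budget for choosing the pool's `max` per process.
const dbMaxConnections = 40;  // e.g. the t2.micro limit discussed above
const reservedForTools = 5;   // developers' GUI clients, migrations, etc.
const servers = 2;            // Node servers running the app
const processesPerServer = 2; // e.g. PM2 cluster workers

const poolMaxPerProcess = Math.floor(
  (dbMaxConnections - reservedForTools) / (servers * processesPerServer)
);
console.log(poolMaxPerProcess); // 8
```

If traffic spikes force you to add servers or workers, the per-process max has to shrink accordingly, or the database's connection limit will be exceeded.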