
Improving performance with a Promise Pool

INHO LEE
June 14th, 2023

Intro

In a Node.js environment, handling a large number of asynchronous tasks is quite common.

For example, consider these scenarios:

  • You need to fetch a large number of records from an external service that only allows lookups by ID
  • You need to publish a large number of event messages
  • Using JOIN or subqueries would cause heavy performance issues and database load, so you need to split queries by ID

In such cases, Node.js makes it easy to manage concurrent work with Promise.all.
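As a quick refresher, here is a minimal sketch of that pattern. The `fetchById` lookup is a hypothetical stand-in (simulated with a timer) for the kind of ID-based external call described above:

```javascript
// fetchById is a hypothetical lookup, simulated here with a short timer.
const fetchById = (id) =>
  new Promise((resolve) => setTimeout(() => resolve({ id }), 10));

async function fetchAll(ids) {
  // All lookups start immediately; Promise.all resolves once every one finishes.
  return Promise.all(ids.map((id) => fetchById(id)));
}

fetchAll([1, 2, 3]).then((records) => {
  console.log(records.map((r) => r.id)); // [ 1, 2, 3 ]
});
```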

But let’s look at how we can go beyond Promise.all to improve performance even further—using a Promise Pool.

When dealing with hundreds (or even thousands) of Promises, it’s usually not efficient to execute them all at once with Promise.all.

For example, suppose you need to run hundreds of queries in a Node.js environment. Because of database connection pool limits, you typically break them into manageable **chunks** and run each chunk with Promise.all.

If your database connection pool is limited to 50 connections and you need to run 1,000 queries, you would usually divide the queries into chunks of 50 or fewer and then execute each chunk with Promise.all. (In real-world production environments, the chunk size is often set much lower than the maximum pool size, since other servers and processes may also be running queries at the same time.)

This chunk-based execution pattern with Promise.all looks something like this:

(Figure: queries split into fixed-size chunks, each chunk executed with Promise.all)

Each chunk runs its Promises in parallel, allowing the tasks to be processed efficiently.

In the Chunk + Promise.all approach, the execution time of a chunk is determined by the longest-running Promise within that chunk.

As a result, no new Promises start until the longest one in the current chunk finishes—leading to potential inefficiency.
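This bottleneck is easy to see with a small timing demo (the delays are scaled down so it runs quickly):

```javascript
const delay = (ms) => new Promise((res) => setTimeout(res, ms));

async function timeChunk(delays) {
  const start = Date.now();
  // The chunk resolves only when its longest delay finishes.
  await Promise.all(delays.map(delay));
  return Date.now() - start;
}

timeChunk([200, 20, 20]).then((elapsed) => {
  // Roughly 200ms: the slowest task sets the pace, even though the
  // other two slots were already free after about 20ms.
  console.log(`chunk took ~${elapsed}ms`);
});
```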

To address this inefficiency, you can manage Promises using a Promise Pool.

The Promise Pool approach works by laying out a fixed number of “rails” (or slots). As soon as one Promise finishes, the next one takes its place—keeping the pool consistently filled with the specified number of concurrent tasks.

(Figure: Promise Pool rails, where a new Promise starts as soon as a slot frees up)

On each rail, as soon as a Promise finishes, the next one starts running in that empty slot.
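The rail behavior can be sketched by hand before reaching for a library. This is a minimal illustration of the idea, not how @supercharge/promise-pool is actually implemented:

```javascript
// Minimal hand-rolled promise pool (illustrative sketch only). Each of
// `poolSize` "rails" is a loop that grabs the next task as soon as its
// current one finishes, so the pool stays full.
async function promisePool(tasks, poolSize) {
  const results = new Array(tasks.length);
  let next = 0; // index of the next task to start

  async function rail() {
    while (next < tasks.length) {
      const i = next++; // no race: JavaScript runs this on a single thread
      results[i] = await tasks[i]();
    }
  }

  await Promise.all(Array.from({ length: poolSize }, () => rail()));
  return results;
}

// Example: five tasks sharing two rails; results keep task order.
const jobs = [1, 2, 3, 4, 5].map((n) => async () => n * 10);
promisePool(jobs, 2).then((out) => console.log(out)); // [ 10, 20, 30, 40, 50 ]
```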

Test

Let’s try running both approaches — a plain Promise.all with chunks and the @supercharge/promise-pool library — and then compare the results.

// Example tasks (simulating async queries with random delays)
const tasks = Array.from({ length: 20 }, (_, i) => async () => {
  const delay = Math.floor(Math.random() * 2000) + 500; // 0.5–2.5s
  await new Promise(res => setTimeout(res, delay));
  return `Task ${i + 1} done in ${delay}ms`;
});
 
// -----------------------------
// 1. Using Promise.all with chunks
// -----------------------------
async function runWithChunks(tasks, chunkSize) {
  const results = [];
  for (let i = 0; i < tasks.length; i += chunkSize) {
    const chunk = tasks.slice(i, i + chunkSize).map(fn => fn());
    const chunkResults = await Promise.all(chunk);
    results.push(...chunkResults);
  }
  return results;
}
 
// -----------------------------
// 2. Using @supercharge/promise-pool
// -----------------------------
const { PromisePool } = require('@supercharge/promise-pool');
 
async function runWithPromisePool(tasks, poolSize) {
  const { results } = await PromisePool
    .for(tasks)
    .withConcurrency(poolSize)
    .process(async fn => await fn());
  return results;
}
 
// -----------------------------
// Run comparison
// -----------------------------
(async () => {
  console.time('Chunks');
  const chunkResults = await runWithChunks(tasks, 5);
  console.timeEnd('Chunks');
 
  console.time('PromisePool');
  const poolResults = await runWithPromisePool(tasks, 5);
  console.timeEnd('PromisePool');
 
  console.log({ chunkResults, poolResults });
})();

Here’s a result comparison I ran (20 async tasks, concurrency of 5):

  • Chunk + Promise.all → ~8.65 seconds
  • Promise Pool → ~6.54 seconds

The Promise Pool finished about 25% faster, since new tasks were able to start immediately as soon as a slot freed up, instead of waiting for the slowest task in each chunk.

(Figure: console timing output for both runs)

Conclusion

Promise.all is effective at running Promises in parallel, but once you split the work into chunks, the total execution time becomes the sum of each chunk's longest-running task.

If the execution times of a large number of Promises vary widely, using a Promise Pool can offer a meaningful performance improvement.