Improving Database Integration Test Performance in Node.js (TypeORM, Jest)
Typically, integration tests use in-memory databases such as SQLite or H2.
Because these databases exist only in memory, they allow ORM (SQL) validation, enable parallel test execution, and provide high-speed query performance.
While most database queries can be executed against an in-memory database, many enterprise systems rely on complex queries that can only be tested on production-like databases.
In environments that actively use advanced database features—such as window functions, stored procedures, or triggers—it becomes difficult to fully validate logic with an in-memory DB.
For this reason, developers often use Docker-based database containers on their local machines to build a more realistic integration testing environment that mirrors production (e.g., MySQL or PostgreSQL).
Of course, in general, it’s still better to maintain a higher proportion of unit tests compared to integration tests that depend on external systems.

However, when refactoring a legacy system or working on a project that heavily depends on the database, it’s often inevitable to rely more on integration tests—at least until the overall architecture can be reorganized.
Ideally, you could restructure the project to increase the proportion of unit tests and reduce dependency on database-driven tests.
But if that’s not feasible right now, you can instead focus on improving the overall performance of database-based integration tests by following the approaches outlined below.
Problem
Let’s assume you have an integration testing environment that uses MySQL running in Docker.
The Docker configuration is as follows:
```yaml
version: '3.8'
services:
  mysql:
    image: mysql:8.0
    container_name: test_mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: test_db
      MYSQL_USER: test_user
      MYSQL_PASSWORD: test_password
    ports:
      - "3350:3306"
    volumes:
      - mysql_data:/var/lib/mysql
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_unicode_ci

volumes:
  mysql_data:
```

The testing environment is as follows:
- 2021 M1 MacBook Pro (16 GB RAM)
- MySQL 8.0 (Docker)
- Node 18.20
- Jest 30.2.0
And the test code that uses it:
```typescript
// /src/generator.ts
import { Point } from "@/entity/Point";
import { Repository } from "typeorm";

export async function bulkInsertPoints(count: number, pointRepository: Repository<Point>) {
  for (let key = 0; key < count; key++) {
    const point = new Point();
    point.point = key * 1_000;
    await pointRepository.save(point);
  }
}
```

```typescript
// src/__test__/point1.test.ts
import { AppDataSource } from "@/data-source";
import { Point } from "@/entity/Point";
import { bulkInsertPoints } from "@/generator";

describe("Point Entity Test 1", () => {
  beforeAll(async () => {
    await AppDataSource.initialize();
  });

  afterAll(async () => {
    await AppDataSource.destroy();
  });

  describe("PointEntity: 10,000 times", () => {
    it("Create Point 10_000 times", async () => {
      const pointRepository = AppDataSource.getRepository(Point);
      await bulkInsertPoints(10_000, pointRepository);
    });
  });
});
```

The full code is available here.
Instead of inserting 60,000 records all at once, the tests were designed in a more realistic way: the data is split across multiple test files so that the database connection lifecycle (create/close) runs multiple times.
The reason for using such a large dataset (60,000 records) is as follows:
- In integration tests, data is often required for pagination queries, complex statistical queries, or 1:N relational inserts.
- Each test can involve inserting 5 to N records, and it’s reasonable to assume there will be dozens or even hundreds of such tests in a real-world scenario.
Note:
- Parallel processing with `Promise.all` and bulk inserts were not considered valid solutions for this problem; they are effective only when a single test requires a large volume of data, which is not the case here.
- In an actual environment, it’s not feasible to create hundreds or thousands of test files, so a `for` loop was used instead to simulate repeated test execution.
- In a real integration testing setup using Docker-based MySQL, tests must run sequentially (`--runInBand`) to prevent them from interfering with each other.
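In a Jest setup, sequential execution is typically wired into the project's test script. A minimal `package.json` sketch (the script name and flags here are illustrative):

```json
{
  "scripts": {
    "test": "jest --runInBand"
  }
}
```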
Running the above tests under these conditions produced the following results:

- 30,000 inserts: 161 seconds
- 20,000 inserts: 96 seconds
- 10,000 inserts: 48 seconds
Summary:
- Total execution time: 306 seconds
- Average (sequential) insert speed: roughly 50 seconds per 10,000 records (about 5 ms per record)
Now, let’s explore how we can optimize this slow and heavy integration test to achieve significantly better performance.
Solution
TypeORM (save -> insert)
ORMs such as TypeORM provide a variety of features related to entity relationships, including cascade operations and relations when saving entities.
However, these conveniences come with a cost — the commonly used .save() method performs additional operations such as running a SELECT query before INSERT to support upsert behavior and ensure entity integrity.
As a result, even simple data insertions can consume significant resources, becoming one of the major causes of performance degradation during integration tests.
Fortunately, most ORMs offer a way to bypass these overheads and execute pure SQL insert operations directly.
In TypeORM, this optimization can be achieved by using the .insert() method.
```typescript
await pointRepository.insert(point);
```

Now, let’s run the same tests again, this time simply replacing the data-saving logic from `.save()` to `.insert()`.
This small change alone should allow us to measure how much performance improves when the ORM skips unnecessary overhead and executes pure insert queries.
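The difference can be illustrated with a small simulation. The `FakeRepository` below is a hypothetical stand-in, not TypeORM's actual implementation; it only mimics the extra SELECT that `.save()` issues before writing a row:

```typescript
// Hypothetical stand-in for a TypeORM repository, used only to illustrate
// the query-count difference between .save() and .insert().
interface PointRow {
  id?: number;
  point: number;
}

class FakeRepository {
  queryLog: string[] = [];
  private rows: PointRow[] = [];

  // .save() behaves like an upsert: it first SELECTs to decide
  // between INSERT and UPDATE, then writes the row.
  async save(row: PointRow): Promise<void> {
    this.queryLog.push("SELECT");
    this.queryLog.push("INSERT");
    this.rows.push({ ...row, id: this.rows.length + 1 });
  }

  // .insert() issues a single INSERT with no preliminary SELECT.
  async insert(row: PointRow): Promise<void> {
    this.queryLog.push("INSERT");
    this.rows.push({ ...row, id: this.rows.length + 1 });
  }
}

async function main(): Promise<void> {
  const repo = new FakeRepository();
  await repo.save({ point: 1_000 });   // 2 queries
  await repo.insert({ point: 2_000 }); // 1 query
  console.log(repo.queryLog.join(",")); // SELECT,INSERT,INSERT
}

main();
```

In the generator above, this means replacing `pointRepository.save(point)` with `pointRepository.insert(point)`; cascades and relation handling are skipped, which is exactly the overhead we want to avoid when inserting bulk test fixtures.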

- 30,000 inserts: 115 seconds
- 20,000 inserts: 79 seconds
- 10,000 inserts: 39 seconds
Summary:
- Total execution time: 234 seconds
- Average (sequential) insert speed: roughly 39 seconds per 10,000 records (about 4 ms per record)
This change alone resulted in roughly a 24% improvement in overall execution time (306 s → 234 s).
@swc-node/jest
We can further improve performance at the application level — not by optimizing the ORM this time, but by enhancing the test runner itself (Jest).
As mentioned in the article “Speeding Up Jest”, using ts-jest often leads to significant performance degradation due to its slower TypeScript transpilation process.
To address this, we can replace it with the much faster @swc-node/jest, which leverages the SWC compiler for high-performance TypeScript transpilation.
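A minimal Jest configuration for this switch might look like the sketch below (the file name and transform regex are assumptions; adjust them to your project layout):

```typescript
// jest.config.ts — use @swc-node/jest instead of ts-jest as the transformer
const config = {
  testEnvironment: "node",
  transform: {
    // Compile .ts/.tsx (and .js/.jsx) files with SWC for faster transpilation
    "^.+\\.(t|j)sx?$": ["@swc-node/jest"],
  },
};

export default config;
```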
After making the change, we run the tests once again to measure the performance improvement.

- 30,000 inserts: 104 seconds
- 20,000 inserts: 70 seconds
- 10,000 inserts: 35 seconds
Summary:
- Total execution time: 210 seconds
- Average (sequential) insert speed: roughly 35 seconds per 10,000 records (about 3.5 ms per record)
A more modest improvement, but still about a 10% performance gain (234 s → 210 s).
Node + Jest
One remaining unresolved issue is that Jest has been reported to suffer from memory leaks on Node.js 16.11 and later.
According to reports such as “Memory consumption issues on Node.js 16.11.0+”, multiple test benchmarks indicate that Jest actually runs faster on Node.js 16.10 than on version 18.12 or later.

Of course, such a noticeable performance difference generally appears only when you have hundreds of tests.
In real-world projects, it’s common to have not just hundreds but even 2,000–3,000 tests running in the suite.
However, in this experiment, that scale was omitted since generating that many test cases would be overly tedious and time-consuming.
If you notice that your test suite becomes slower over time, try running Jest with the --logHeapUsage flag to check whether memory usage keeps increasing as tests progress.
If it does, consider locking your Node.js version to 16.10, which has been shown to provide more stable performance and lower memory consumption with Jest.
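To check for this, the heap flag can be combined with sequential execution (this invocation assumes Jest is installed locally; Jest then prints the heap size after each test file):

```shell
# Print per-test-file heap usage while running tests sequentially
npx jest --runInBand --logHeapUsage
```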
Conclusion
Improving the performance of integration tests can have a significant impact on the overall productivity of both the team and the project.
There are many alternative testing approaches — such as unit tests, tests using test doubles (mocks/stubs), and in-memory database tests — and leveraging these can greatly enhance the overall test suite performance.
However, integration or end-to-end (E2E) tests that run in an environment identical to production are still essential.
When that’s the case, applying the optimization strategies discussed here can lead to meaningful performance improvements, ultimately boosting team-wide efficiency and development velocity.