
How I Built a QA Automation Suite with 1,434 Tests and 99% Coverage for a Personal Project

Most people treat QA automation as something you do at work — when the team grows, when the product matures, or when a manager mandates it. I did it for a side project. Not because I had to, but because I wanted to see what it actually takes to build one from scratch, on real code, end-to-end. This is a hands-on account of how I set up a full QA automation suite for a personal management app built on Next.js 15 + Supabase, using Cypress as the test runner. By the end, the suite had 1,434 automated test cases across 61 spec files, covering 99% of features (75/76), with zero mocked databases.

The Implementation

The App

The app has two domains:

  • Inventory Management — stock tracking, usage logs, product brands, purchase history
  • Stock Trading — trades, fees, events, portfolio tracking

Both domains share the same auth layer (Supabase SSR + JWT), the same API structure (Next.js route handlers), and the same UI stack (shadcn/ui + Tailwind).

The decision to build serious tests for a side project came from a simple frustration: every time I touched inventory logic, I'd accidentally break trading features — and only find out after manually clicking around. That's the problem automation solves.

Why Cypress for E2E

The choice was deliberate. I considered:

  • Vitest / Jest — great for unit tests, but I wanted to test the full stack: API routes, database writes, UI behavior, and auth flows together
  • Playwright — excellent, but Cypress has a more ergonomic API for my use case and better custom command support
  • Cypress — wins because of cy.session() for auth caching, the task plugin system for DB access, and how naturally it handles both API and UI tests in the same framework

Project Structure

```
cypress/
├── e2e/
│   ├── auth/
│   ├── api-auth/
│   ├── landing_page/
│   ├── inventory_management/
│   │   ├── dashboard/
│   │   ├── product/
│   │   ├── product_brand/
│   │   └── product_name/
│   └── trading_management/
│       ├── trade/
│       ├── fee/
│       └── event/
├── support/
│   ├── common/   ← custom Cypress commands per domain
│   ├── db/       ← direct Supabase query helpers
│   └── engine/   ← Supabase client factory
└── plugin/
    └── tasks/    ← Node.js tasks (auth, DB operations)
```

Each domain has its own set of custom commands split by concern:

```
support/common/inventory/product/
├── api-commands.js   ← cy.AddProduct(), cy.UpdateProduct(), etc.
└── db-commands.js    ← cy.getProductFromDb()
```

This separation keeps test files clean and makes commands reusable across specs.
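To make the split concrete, here is a sketch of what one of those API commands could look like. The endpoint URL and option choices are my assumptions for illustration, not the project's actual code:

```javascript
// Hypothetical sketch of support/common/inventory/product/api-commands.js.
Cypress.Commands.add("AddProduct", (body) => {
  return cy.request({
    method: "POST",
    url: "/api/inventory/products", // assumed route handler path
    body,
    failOnStatusCode: false, // let specs assert on 4xx responses themselves
  });
});
```

Returning the `cy.request()` chain lets specs call `.then((res) => ...)` on the command, and `failOnStatusCode: false` is what allows validation specs to assert on 400 responses instead of failing immediately.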

The Auth Problem (And How I Solved It)

Auth is always the first bottleneck in E2E testing. The app uses Supabase SSR with JWT cookies — real session tokens, not mocked. Two problems emerged immediately:

Problem 1: Login is slow. Making a real Supabase auth call before every test would add ~2-3 seconds per spec. With 61 spec files, that's significant.

Solution: cy.session() with cross-spec caching.

```javascript
Cypress.Commands.add("login", (email, password) => {
  const testEmail = email || Cypress.env("TEST_EMAIL");
  const testPassword = password || Cypress.env("TEST_PASSWORD");

  cy.session(
    [testEmail, testPassword],
    () => {
      cy.task("getSupabaseSession", {
        email: testEmail,
        password: testPassword,
      }).then((session) => {
        cy.setCookie("cypress-session-token", session.access_token);
        cy.window().then((win) => {
          win.localStorage.setItem("cypress-session", JSON.stringify(session));
        });
      });
    },
    {
      cacheAcrossSpecs: true, // ← session survives across spec files
      validate() {
        cy.getCookie("cypress-session-token").should("exist");
      },
    }
  );
});
```

The session is obtained once via a Cypress task (Node.js context, calling Supabase directly), then cached. All subsequent specs restore it from cache instead of re-authenticating.
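The Node side of that task might look like the following sketch in `cypress.config.js`, assuming `@supabase/supabase-js` v2; the env var names are placeholders:

```javascript
// cypress.config.js (sketch, not the project's actual config)
const { defineConfig } = require("cypress");
const { createClient } = require("@supabase/supabase-js");

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on) {
      on("task", {
        // Runs in Node, so it can authenticate against Supabase directly,
        // with no browser, no UI, and no CORS involved.
        async getSupabaseSession({ email, password }) {
          const supabase = createClient(
            process.env.SUPABASE_URL,
            process.env.SUPABASE_ANON_KEY
          );
          const { data, error } = await supabase.auth.signInWithPassword({
            email,
            password,
          });
          if (error) throw new Error(error.message);
          return data.session; // includes the access_token used for the cookie
        },
      });
    },
  },
});
```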

Problem 2: UI tests need auth bypass. When testing UI behavior — filtering, sorting, dialogs — I don't want auth middleware getting in the way. Full login is overkill.

Solution: Secret-based bypass cookie.

```javascript
Cypress.Commands.add("enableBypass", () => {
  cy.setCookie("cypress-bypass", Cypress.env("CYPRESS_AUTH_SECRET"), {
    path: "/",
    httpOnly: false,
  });
});
```

The middleware checks for this cookie and skips JWT validation when it matches CYPRESS_AUTH_SECRET (an env var that only exists in test environments). For UI tests, I call cy.loginWithBypass() — one command that does both.
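The middleware-side check reduces to a tiny predicate. A minimal sketch, with the helper name and shape mine rather than the app's actual middleware code:

```javascript
// Sketch of the bypass rule the middleware applies. Bypass only when a
// secret is configured AND the cookie matches it exactly, so production
// (where CYPRESS_AUTH_SECRET is unset) can never be bypassed.
function shouldBypassAuth(cookieValue, secret) {
  return Boolean(secret) && cookieValue === secret;
}

console.log(shouldBypassAuth("s3cret", "s3cret"));   // true  → skip JWT validation
console.log(shouldBypassAuth("s3cret", undefined)); // false → normal auth path
```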

No Mocked Database

This is a deliberate choice that I want to highlight.

Every test that touches data hits a real Supabase database — a dedicated test project, not production. This means:

  • Tests catch real constraint violations (not-null, foreign keys, unique)
  • Query performance issues surface during test runs
  • No mock/prod divergence — the bugs you find are real bugs

The trade-off: tests are slower and require cleanup. I handle cleanup in after() hooks, using the same API commands the app exposes:

```javascript
after(() => {
  cy.DeleteProduct(productId).then((res) => {
    expect(res.status).to.eq(200);
  });
});
```

For DB verification (confirming an API response matches what's actually stored), I query Supabase directly:

```javascript
// support/db/inventory/getProductListFromDb.js
export async function getProductListFromDb(supabase, productId) {
  const { data, error } = await supabase
    .from("product_list")
    .select("*")
    .eq("id", productId)
    .is("deleted_at", null)
    .single();

  // PGRST116 = PostgREST "no rows returned" — treat as "not found"
  if (error && error.code === "PGRST116") return null;
  if (error) throw new Error(error.message);
  return toProductListDto(data);
}
```

This gives me a second source of truth beyond the API response — the actual database row.
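The `toProductListDto` mapper referenced above isn't shown in the post; a plausible minimal version (the column and field names here are assumptions) would just normalize the raw row into the shape the API returns:

```javascript
// Hypothetical sketch of the DTO mapper: converts a raw Supabase row
// (snake_case columns) into the API's response shape (names assumed).
function toProductListDto(row) {
  if (!row) return null;
  return {
    id: row.id,
    productName: row.product_name,
    brandId: row.brand_id,
    createdAt: row.created_at,
  };
}

const dto = toProductListDto({
  id: 7,
  product_name: "Detergent",
  brand_id: 3,
  created_at: "2024-01-01",
});
// dto.productName === "Detergent"
```

Mapping the row through the same DTO shape as the API response means the "API said X" and "database stored X" assertions compare like with like.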

Test Data: Faker.js + Builder Pattern

Hardcoded test data is fragile. I use @faker-js/faker to generate realistic data per test run, combined with a builder pattern that allows selective overrides:

```javascript
const buildRequest = (overrides = {}) => ({
  product_id: validProductId,
  brand_id: validBrandId,
  type: faker.word.noun(),
  usage_quantity: faker.number.int({ min: 1, max: 100 }),
  note: faker.word.words(10),
  product_image: "",
  ...overrides,
});
```

Validation tests use this builder to isolate one broken field at a time:

```javascript
it("should reject missing brand_id", () => {
  cy.AddProduct(buildRequest({ brand_id: null })).then((res) => {
    expect(res.status).to.eq(400);
  });
});
```

This pattern makes it trivial to generate 30+ validation test cases without copy-paste.
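Scaling from one case to dozens is then a matter of iterating over a table of broken fields. A sketch of that pattern (the specific invalid values are examples, not the project's actual list):

```javascript
// Each entry breaks exactly one field; everything else stays valid,
// so a failing case points directly at the validation rule under test.
const invalidCases = [
  { field: "brand_id", value: null },
  { field: "type", value: "" },
  { field: "usage_quantity", value: 0 },
];

invalidCases.forEach(({ field, value }) => {
  it(`should reject invalid ${field}`, () => {
    cy.AddProduct(buildRequest({ [field]: value })).then((res) => {
      expect(res.status).to.eq(400);
    });
  });
});
```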

Global Lifecycle Hooks

```javascript
before(() => {
  cy.task("log", "=== Starting Cypress Test Suite ===");
});

afterEach(function () {
  const title = this.currentTest?.title || "Unknown test";
  const state = this.currentTest?.state || "unknown";
  cy.task("log", `Test "${title}": ${state}`);
});
```
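The `log` task these hooks call has to be registered on the Node side. A sketch of that registration inside `setupNodeEvents` in `cypress.config.js`:

```javascript
// Print to the runner's terminal, which cy.log cannot do in headless runs.
on("task", {
  log(message) {
    console.log(message);
    return null; // a Cypress task must return a value (or null) to resolve
  },
});
```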

Coverage Strategy

I track coverage across three dimensions:

| What | How |
| --- | --- |
| Feature coverage | Manual list in coverage-report.md |
| Test execution | Last run date + pass rate per spec |
| Coverage gaps | Explicit list of what's manual-only and why |

The current numbers:

| Module | Coverage |
| --- | --- |
| Auth | 89% |
| API Auth Guard | 100% |
| Landing Page | 100% |
| Inventory Dashboard | 100% |
| Product (API + UI) | 100% |
| Trading (Trade/Fee/Event) | 100% |
| Total | 99% |

The one gap: Google OAuth UI flow. It requires a real browser redirect to Google's servers — something Cypress can't intercept without a custom OAuth mock server. I've documented it as P2 with the recommended fix (cypress-social-logins or a local mock provider).

Responsive Testing at Scale

UI tests run against three viewport sizes in a single spec:

```javascript
const viewports = [
  { name: "desktop", width: 1280, height: 720 },
  { name: "tablet", width: 768, height: 1024 },
  { name: "mobile", width: 375, height: 667 },
];

viewports.forEach(({ name, width, height }) => {
  it(`should display correctly on ${name}`, () => {
    cy.viewport(width, height);
    cy.visit("/inventory");
    cy.get("[data-testid='product-list']").should("be.visible");
  });
});
```

All data-testid attributes are registered in a central fixture file (cypress/fixtures/app-constants.json) so test IDs never drift between implementation and tests.
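One small helper pattern pairs well with a central test-ID registry. The fixture shape and helper below are my illustration, not necessarily the project's:

```javascript
// Hypothetical shape of cypress/fixtures/app-constants.json:
//   { "testIds": { "productList": "product-list", "addProductButton": "add-product-btn" } }
const testIds = { productList: "product-list", addProductButton: "add-product-btn" };

// Build the selector from the registered id so specs never hardcode strings.
const byTestId = (id) => `[data-testid='${id}']`;

console.log(byTestId(testIds.productList)); // [data-testid='product-list']
```

With this, a renamed test ID is a one-line fixture change instead of a find-and-replace across 61 spec files.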

What the Numbers Look Like

```
Total spec files:      61
Total test cases:      1,434
Pass rate (last run):  100%
Feature coverage:      99% (75/76 features)
Manual-only tests:     1 (Google OAuth)
```

The dashboard UI spec alone has 88 test cases and takes ~6 minutes. The add-product spec has 100 cases covering authentication, validation, happy path, and edge cases.

What I Would Do Differently

1. Start with a test ID convention from day one. I retrofitted data-testid attributes into components as I wrote tests. It works, but it's cleaner to define them upfront in the component design phase.

2. Mock the OAuth provider earlier. Running with 89% auth coverage because of one edge case is annoying. A local OAuth mock would have closed that gap on day one.

3. Parallelize spec execution. Right now specs run sequentially. Cypress Cloud or a local parallelization setup would cut overall runtime significantly.
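For reference, Cypress Cloud parallelization is driven from the CLI; it needs a record key from the Cypress Cloud dashboard, and the env var names below are placeholders:

```shell
# Split specs across N CI machines that share the same --ci-build-id.
# Cypress Cloud load-balances specs between the machines automatically.
npx cypress run --record --key "$CYPRESS_RECORD_KEY" --parallel --ci-build-id "$BUILD_ID"
```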

QA automation on a side project taught me things that production projects with dedicated QA teams often obscure: you feel every slow test, every flaky assertion, every poorly named selector directly. There's no abstraction between you and the pain. The result — a suite that catches regressions before I see them in the browser, across two domains, 61 spec files, and a real database — is something I'd never trade back for manual testing. The upfront investment pays itself off the moment you refactor a shared utility and all 1,434 tests still pass.

The Testing Stack

Every tool chosen with purpose — from feature to assertion.

Cypress

Modern end-to-end testing with real-time browser execution and automatic waiting

JavaScript

Used across the full stack — frontend application, test specs, and automation layer

Jenkins

Runs the automated test suite as part of the CI/CD pipeline
