Manual Testing

How I Write Test Cases (My Personal Format & Real-World Guide)

I've been doing QA work long enough to know that the quality of your test cases directly reflects the quality of what ships to users. Early in my career, I wrote test cases just to have documentation, something to check off. Over time, I realized that mindset was completely wrong: test cases aren't paperwork, they're your first line of defense against broken experiences. Now I write them to find bugs before users do, and that shift changed everything. I became more curious, more thorough, and a lot less surprised when something breaks. A well-written test case isn't just a quality artifact; it's a form of communication between you, the developers, and the product team.

the implementation

Start With Understanding the Feature

Before writing a single test case, I make sure I actually understand what the feature is supposed to do. I read the requirements, check the design, and if something is unclear, I ask. Assumptions are the fastest way to write test cases that miss the point entirely.

Cover the Happy Path First

The happy path is the scenario where everything works as expected — valid input, normal conditions, expected output. I always start here because it's the baseline. If the happy path is broken, nothing else matters.
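
To make that concrete, here is what a happy-path case might look like in my format, using a hypothetical login feature (the feature, the credentials, and the dashboard redirect are all made up for the example):

Test case: TC-001 - Login with valid credentials
Precondition: A registered, active user account exists.
Steps: 1. Open the login page. 2. Enter a valid email and password. 3. Click "Log in".
Expected result: The user is redirected to the dashboard and their name appears in the header.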

Then Cover the Edge Cases

Once the happy path is covered, I move to edge cases — empty inputs, maximum character limits, invalid formats, unauthorized access, and boundary values. These are the scenarios that developers often don't think about during implementation, which makes them exactly the scenarios most likely to have bugs.
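
For the same hypothetical login feature, the edge cases I'd sketch might look like this (the field limits and roles are invented for illustration, not real requirements):

TC-002 - Login with the email field left empty
TC-003 - Login with a password at exactly the maximum allowed length
TC-004 - Login with a password one character over the maximum
TC-005 - Login with an email in an invalid format, such as "user@"
TC-006 - Open the admin dashboard URL directly while logged in as a regular user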

Keep Each Test Case Focused

One test case should test one thing. If a test case is trying to verify three different behaviors at once, it becomes hard to maintain and hard to debug when it fails. I keep each case narrow and specific — a clear precondition, a single action, and an expected result.
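
As a quick illustration (again hypothetical): instead of one case titled "Login, update profile, and log out", I'd split it into three, each with its own precondition and expected result:

TC-010 - Log in with valid credentials
TC-011 - Update the profile display name while logged in
TC-012 - Log out from the account menu

If TC-011 fails, I know straight away that the problem sits in profile editing, not in login or logout.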

Write the Expected Result Clearly

A test case without a clear expected result is just a checklist of steps. I always define exactly what "pass" looks like — the specific response, status code, UI state, or data that should appear. This also makes it much easier when someone else needs to execute or review the test.
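
A small before-and-after, using a hypothetical password-reset request (the status code and message text are assumptions for the example, not documented behavior):

Vague: "The request works."
Clear: "The API responds with status 200, the message 'Reset link sent' appears on screen, and a reset email is delivered to the entered address."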

Don't Forget Negative Cases

Negative testing is often underrated. These are cases where the system should reject something — invalid credentials, missing required fields, unsupported file types. The system should fail gracefully, not crash or expose sensitive errors.
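
For example, a negative case for the same hypothetical login feature might read like this (the exact error message and lockout behavior are assumptions for illustration):

Test case: TC-020 - Login with an incorrect password
Precondition: A registered user account exists.
Steps: 1. Open the login page. 2. Enter a valid email with a wrong password. 3. Click "Log in".
Expected result: Login is rejected with a generic "Invalid email or password" message, no stack trace or technical detail is exposed, and the account is not locked after a single failed attempt.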

Revisit and Update Regularly

Test cases aren't a one-time artifact. Whenever a feature changes or a new bug is found in production, I go back and update the related test cases. A production bug is almost always a sign that a test case is missing or outdated.

Good test cases start long before you open a spreadsheet or a testing tool. They start with genuinely understanding the feature and putting yourself in the user's shoes. From there, cover the happy path, dig into edge cases, and never skip negative scenarios. Keep each case focused on one thing, always define a clear expected result, and treat your test cases as living documents that grow with the product. The goal was never just documentation; it has always been about catching bugs before users do, and making sure the whole team ships with confidence.

the testing stack

Every tool chosen with purpose — from feature to assertion.

Google Sheets

for quick, lightweight documentation, especially in early project stages where a full test management tool feels like overkill.

Linear

for linking failed test cases to bug reports and tracking them through the resolution process.

Qase.io

for managing test cases, running test cycles, and tracking results across builds. It keeps everything organized and easy to share with the team.
