
How to Select Test Cases for Automation: A Practical Guide

Written by Vincent N. | May 9, 2025 5:59:10 AM

Test automation is essential if you want to move fast without breaking things. But here’s the hard truth: not every test is worth automating. And trying to automate everything is how teams burn time, introduce flakiness, and end up maintaining tests that add zero value.

So how do you know what test cases to automate?

That’s what this guide is for. We’ll walk through the key criteria for evaluating your test cases, the types of tests that actually deliver ROI when automated, and the ones you should skip (for now). We’ll also introduce a simple but powerful tool, the Test Case Selection Matrix, that helps you make those calls with clarity, not guesswork.

Why test case selection matters

Let’s get one thing straight: Automation ≠ testing everything.

That mindset is how teams end up drowning in a sea of flaky tests and false confidence.

The goal of automation isn’t coverage for the sake of coverage. It’s about speed, stability, and return on effort. Every test you automate should save time, reduce risk, or catch bugs faster than a manual alternative. If it doesn’t, you’re wasting cycles.

Here are the hidden costs of automating the wrong test cases:

  • Brittle UI tests that break every time a button shifts 5 pixels to the left? Time sink.

  • End-to-end tests that rely on unstable data or third-party dependencies? Maintenance nightmare.

  • Tests that never fail and rarely run? They’re just noise in your CI pipeline.

Every automated test becomes a piece of code your team has to maintain. Multiply that by hundreds, and bad choices add up fast, slowing down releases instead of accelerating them.

📚 Read More: A Practical Guide on Test Automation

Types of test cases ideal for automation

If you’re going to automate, automate with intent. Focus on test cases that are:

  • High-frequency: Think regression, smoke, and sanity tests. If you run it every sprint or every pull request, it’s a strong candidate.

  • Stable and predictable: Avoid automating things that change constantly or behave inconsistently.

  • Business-critical: Core flows like login, checkout, or API contracts. These are the things you must know are working before release.

  • Data-driven: Tests where you can reuse logic across many input sets without rewriting (see the sketch below).

  • Time-consuming to do manually: Tedious flows that eat up hours each release cycle? Perfect for automation.

Automation works best when it's consistent, reliable, and part of your delivery rhythm.
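
To make the data-driven point concrete, here’s a minimal sketch using pytest’s `parametrize` marker. The `login` function and the credential data are hypothetical stand-ins for your real system under test:

```python
import pytest

def login(username: str, password: str) -> bool:
    # Hypothetical stand-in for the real system under test.
    return username == "admin" and password == "s3cret"

# One test body, many input sets: pytest runs this three times,
# once per tuple, with no duplicated logic.
@pytest.mark.parametrize(
    "username, password, expected",
    [
        ("admin", "s3cret", True),   # happy path
        ("admin", "wrong", False),   # bad password
        ("", "s3cret", False),       # missing username
    ],
)
def test_login(username, password, expected):
    assert login(username, password) is expected
```

Adding a new scenario is just one more row of data, which is exactly why data-driven cases age well under automation.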

Tests you should NOT automate (or delay)

Not everything belongs in your automation suite. Here are the kinds of tests you should keep manual, for now or forever:

  • One-time or rarely run tests

  • Exploratory and UX-focused tests

  • Highly unstable features or UI elements

  • Tests requiring physical devices or complex hardware

Exploratory and UX-focused testing, in particular, is where the genuinely fun, creative work of testing lives. Why automate that? Automation exists so machines can handle the repetitive checks, leaving more space for us humans to do the creative work. Let the machines do their job, while we take care of ours.

Criteria for selecting test cases for automation

  • Repeatability & frequency
    Is this test run often and on a regular basis (e.g., every sprint or release)? Automate high-frequency tests like regression, smoke, or sanity checks.

  • Stability
    Is the feature under test stable and unlikely to change soon? Avoid automating areas that are still evolving or frequently updated.

  • Determinism
    Does the test produce consistent, predictable results? Flaky, data-dependent, or timing-sensitive tests are poor automation candidates.

  • Criticality
    Would failure in this area severely impact users or business operations? Automate tests that guard mission-critical flows like payments or logins.

  • Complexity vs. effort
    Is the automation effort justified by the value it brings? Skip test cases that require heavy setup but offer little return.

  • Data-driven potential
    Can the test logic stay the same while running with multiple data sets? If yes, it's a strong candidate for parameterized or data-driven automation.

  • Test independence
    Can this test run on its own without relying on other tests? Independent tests are more reliable and easier to debug and maintain.

  • Setup and teardown feasibility
    Can the test environment be reliably set up and torn down? Tests that depend on brittle or hard-to-reproduce setups are poor automation candidates (see the sketch after this list).

  • UI stability
    Is the UI under test stable with minimal layout or DOM changes? Avoid automating UI tests in frequently changing interfaces to reduce maintenance.

  • Cross-platform relevance
    Does this test need to run across multiple devices, browsers, or OSs? Automate when coverage across platforms is important and manual execution is inefficient.

  • Reusability of components
    Can parts of this test be reused in other automated scenarios? Tests that can share utilities, data models, or fixtures increase long-term efficiency.
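
Two of these criteria, test independence and setup/teardown feasibility, are easiest to see in code. Here’s a hedged sketch using a pytest fixture; the in-memory “database” is a made-up stand-in for whatever your tests really depend on:

```python
import pytest

@pytest.fixture
def db():
    database = {"users": []}   # setup: build a clean, known state
    yield database             # hand it to exactly one test
    database.clear()           # teardown: leave nothing behind

def test_add_user(db):
    db["users"].append("alice")
    assert db["users"] == ["alice"]

def test_starts_empty(db):
    # Independent: passes in any order because no state leaks between tests.
    assert db["users"] == []
```

If you can’t write a setup and teardown this clean for a test, that’s a signal the test may not be ready for automation yet.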

Test case selection matrix 

Test case selection doesn’t have to rely on gut instinct. You don’t need to guess which tests are worth automating, and you shouldn’t. That’s where the Test Case Selection Matrix comes in.

1. What is a test case selection matrix, and why does it help?

It’s a simple scoring model that helps you evaluate and prioritize test cases based on real criteria—not hunches. It forces alignment around what’s actually worth automating by looking at the stuff that matters: how often the test runs, how critical the feature is, how reusable the test logic is, and how painful it is to do manually.

The result? A clear picture of where your automation effort will deliver the highest ROI—and what you should leave alone.

2. How the test case selection matrix works

  • Assign a score (0–1) to each factor:
    Run Frequency, Stability, Business Criticality, Reusability, and Manual Effort

  • Tally up the total score for each test case

  • Set a baseline threshold—e.g., 3.5 or above = good candidate for automation

  • Use this to guide backlog grooming, sprint planning, or automation roadmaps
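
As a rough illustration, the whole matrix fits in a few lines of code. The factor names and the 3.5 threshold come straight from the steps above; treating the total as a simple unweighted sum is an assumption, and you may want to weight a factor like business criticality more heavily:

```python
# Factors and threshold from the matrix above; the unweighted sum is
# an assumption, not the only way to combine scores.
FACTORS = ("run_frequency", "stability", "business_criticality",
           "reusability", "manual_effort")
THRESHOLD = 3.5

def automation_score(scores: dict[str, float]) -> float:
    """Sum five factor scores (each 0-1) into a 0-5 total."""
    return sum(scores[factor] for factor in FACTORS)

def should_automate(scores: dict[str, float]) -> bool:
    return automation_score(scores) >= THRESHOLD
```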

🧾 Sample test case selection matrix

Take a look at this sample test case selection matrix:

| Test Case              | Run Frequency | Stability | Business Critical | Reusability | Manual Effort | Automation Score (0–5) | Automate? |
|------------------------|---------------|-----------|-------------------|-------------|---------------|------------------------|-----------|
| Login Flow             | High          | Yes       | Yes               | High        | High          | 5                      | ✅ Yes    |
| Newsletter Popup Style | Low           | No        | Low               | Low         | Low           | 1                      | ❌ No     |

The Login Flow is a textbook example of a good candidate for automation:

  • It runs every sprint (if not every commit).

  • The logic is stable.

  • It’s critical—if login breaks, users are locked out.

  • The steps are reusable across other tests (e.g., login before checkout).

  • Manually repeating it is tedious and error-prone.

Score: 5/5. A no-brainer. Automate it, monitor it, rely on it.

The Newsletter Popup Style is the opposite:

  • The feature doesn’t impact core workflows.

  • Visual/UI elements shift often (making it brittle).

  • Little is lost if it breaks—it’s not revenue-generating.

  • Test logic isn’t reusable elsewhere.

  • Manual testing here is fast and simple.

Score: 1/5. Automating this would cost more in maintenance than it saves in effort. Keep it manual or fold it into exploratory testing.
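
Running both sample rows through the scoring sketch above reproduces the table, assuming one possible label mapping (High/Yes as 1.0, Low/No as 0.2; the matrix doesn’t prescribe an exact conversion):

```python
# Hypothetical label-to-score mapping: High/Yes -> 1.0, Low/No -> 0.2.
# Reuses FACTORS, automation_score, and should_automate from the sketch above.
login_flow = dict.fromkeys(FACTORS, 1.0)
newsletter_popup = dict.fromkeys(FACTORS, 0.2)

print(automation_score(login_flow), should_automate(login_flow))              # 5.0 True
print(automation_score(newsletter_popup), should_automate(newsletter_popup))  # 1.0 False
```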