The Katalon Blog

How to Write Test Cases for Automation Testing?

Written by Vincent N. | Apr 23, 2025 7:10:12 AM

Test automation is about speed, scale, and precision. As software teams ship faster, the pressure on QA increases, but manual testing can’t keep up with that speed. You need a way to validate key workflows without slowing things down.

That’s where automation testing comes in. With well-designed automation test cases, you can significantly speed up your testing and shorten release cycles.

In this guide, I’ll show you how to write test cases for automation testing, along with the best practices to follow.

Let's dive right in!

What are automation test cases?

At their core, automation test cases are scripted instructions that simulate user behavior and validate the outcome. They define four things:

  • Preconditions: What must be true before the test runs (e.g. user logged in, database seeded)

  • Test Steps: The sequence of interactions to simulate (e.g. fill form, click submit)

  • Expected Results: What the system should do (e.g. show success message, redirect to dashboard)

  • Actual Results: What actually happened when the script ran

But here’s the nuance: you can’t just copy-paste your manual test cases into a script. Automation has different constraints. It needs to be stable, repeatable, and code-aware.

Manual testers rely on intuition. Automation relies on clarity. So test cases must be purpose-built with automation in mind, i.e. minimal UI dependency, clear assertions, isolated flows.
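To make "clear assertions" concrete: a manual step like "check that login works" is too vague to automate, because a script can only verify explicit, machine-checkable conditions. Here is a minimal sketch of what that looks like (the response shape and values are hypothetical, not from any real API):

```python
# Manual phrasing: "check that login works" -- too vague to automate.
# Automation needs explicit, machine-verifiable assertions instead.

def check_login_response(response: dict) -> None:
    # Hypothetical response shape; adapt to your application's actual API.
    assert response["status"] == 200, "expected HTTP 200 from login"
    assert response["redirect"] == "/dashboard", "expected redirect to dashboard"

# A clear pass: both assertions hold, so no exception is raised.
check_login_response({"status": 200, "redirect": "/dashboard"})
```

Each assertion names exactly one expectation, so when the test fails, the message tells you which check broke.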

📚 Read More: 9 Core Benefits of Test Automation

Why writing good automation test cases matters

Good automation test cases pay off in every sprint. Here’s why:

  • Faster regression cycles: What used to take days now runs in hours, or even less.

  • Consistent results: No more human error. The same steps, the same checks, every time.

  • Scalability: Run the same test across browsers, devices, and environments.

  • Team velocity: QA shifts from bottleneck to accelerator. Developers get feedback faster, and bugs are caught earlier.

But automation should not be seen as a silver bullet. The value comes from choosing the right things to automate, not trying to automate everything.

💡 Pro tip: Focus on flows that are stable, high-impact, and repeatable. Skip anything that’s volatile or hard to assert.

How to decide what to automate?

Test automation should start where it makes the most impact.

Best test scenarios for automation

  • Login and registration flows: Critical paths users hit daily. Simple but important.

  • Form validations: Check both success (valid inputs) and failure (invalid inputs).

  • Backend verifications: Confirm database updates or API responses — ideally through integration tests, not the UI.

  • Smoke and regression tests: Repetitive and essential. Perfect for automation.

  • Cross-browser and platform tests: Ensure consistency across environments.

These are high-leverage tests: small effort, big payoff.
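To illustrate the backend-verification point: instead of asserting through the UI, query the data store directly. Below is a sketch using an in-memory SQLite database as a stand-in for a real test database (the table and columns are invented for the example):

```python
import sqlite3

# Stand-in for the application's test database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, active INTEGER)")

# Simulate the side effect the application would produce (e.g. after sign-up).
conn.execute("INSERT INTO users VALUES ('validuser@example.com', 1)")

# Backend verification: assert on the stored state, not on what the UI shows.
row = conn.execute(
    "SELECT active FROM users WHERE email = ?", ("validuser@example.com",)
).fetchone()
assert row is not None and row[0] == 1, "user record missing or inactive"
```

Checks like this run in milliseconds and don't break when the UI changes, which is why they're preferred over driving the same verification through a browser.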

What NOT to automate

  • Rapidly changing UIs: Automation breaks easily when the UI keeps shifting.

  • Highly visual features: Layout bugs and image alignment issues are best left to human eyes.

  • Exploratory scenarios: The whole point is flexibility. Don’t script curiosity.

💡 Pro tip: End-to-end UI tests are expensive to build and maintain. Use them sparingly: only for business-critical flows. For everything else, lean on unit and integration tests.

Key elements of an automation test case

Automation thrives on clarity. Every test case should include the following:

  • Test Case ID: A unique identifier used to track and reference the test case.

  • Description: A short, specific, action-oriented statement that clearly summarizes what is being tested.

  • Preconditions: Conditions or system state that must be met before the test can be executed.

  • Test Steps: A step-by-step list of actions to perform during the test.

  • Test Data: Specific input values required for executing the test case.

  • Expected Result: The system’s expected behavior or output based on the test steps.

  • Actual Result: The actual behavior or output observed during test execution.

  • Postconditions: The system state after the test completes (optional but helpful for chaining tests).

  • Pass/Fail Criteria: The specific conditions that determine whether the test has passed or failed.
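These elements map naturally onto a simple record type in code. Here's one possible structure (the field names are my own, not a standard schema):

```python
from dataclasses import dataclass, field

@dataclass
class AutomationTestCase:
    test_case_id: str
    description: str
    preconditions: list
    steps: list
    test_data: dict
    expected_result: str
    actual_result: str = ""          # filled in after execution
    postconditions: list = field(default_factory=list)
    pass_fail_criteria: str = ""

tc = AutomationTestCase(
    test_case_id="TC001",
    description="Verify Login with Valid Credentials",
    preconditions=["User is on the Etsy login popup"],
    steps=["Enter a valid email", "Enter the valid password", "Click Sign In"],
    test_data={"email": "validuser@example.com", "password": "validpassword123"},
    expected_result="User is logged in and redirected to the homepage",
)
```

Keeping test cases in a structured form like this makes them easy to store, report on, and feed into a test runner.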

How to create an automation test case

Let's create a test case for the Etsy login popup:

  • Test Case ID: TC001

  • Description: Verify Login with Valid Credentials

  • Preconditions: User is on the Etsy login popup

  • Test Steps:
    1. Enter a valid email address.
    2. Enter the corresponding valid password.
    3. Click the "Sign In" button.

  • Test Data: Email: validuser@example.com; Password: validpassword123

  • Expected Result: The user is successfully logged in and redirected to the homepage or the previously intended page.

  • Actual Result: (To be filled in after execution)

  • Postconditions: User is logged in and the session is active.

  • Pass/Fail Criteria: Pass if the user is logged in and redirected correctly; fail if an error message is displayed or the user is not logged in.

  • Comments: Ensure the test environment has network access and the server is operational.


Now let's turn that into a test script.

There are two approaches you can take:

  • Either you use a test automation framework to write the script in a certain programming language.
  • Or you use a test automation tool with features to help you build test cases from scratch without having to code much.

1. Write a test case in Selenium

Install Selenium, plus webdriver-manager (used below to download the matching ChromeDriver), if you haven’t already:

Terminal
pip install selenium webdriver-manager

The steps we will code include:

  1. Set up Chrome to run in headless mode (no GUI). This is optional.
  2. Initialize WebDriver 
  3. Use “driver.get(url)” to open the Etsy login page
  4. Locate the web elements by their IDs and XPath
  5. Enter credentials
  6. Submit the form
  7. Check status

Here’s your script:

Python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager

# Set up Chrome options
chrome_options = Options()
chrome_options.add_argument("--headless")  # Run in headless mode if you don’t need a UI

# Initialize the WebDriver
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options)

try:
    # Navigate to Etsy login page
    driver.get("https://www.etsy.com/signin")

    # Find the email and password fields and the login button
    email_field = driver.find_element(By.ID, "join_neu_email_field")
    password_field = driver.find_element(By.ID, "join_neu_password_field")
    login_button = driver.find_element(By.XPATH, "//button[@type='submit']")

    # Enter credentials (Replace with valid credentials)
    email_field.send_keys("your-email@example.com")
    password_field.send_keys("your-password")

    # Click the login button
    login_button.click()

    # Add assertions or further actions as needed
    # For example, check if login was successful
    # account_menu = driver.find_element(By.ID, "account-menu-id")
    # assert account_menu.is_displayed(), "Login failed"

finally:
    # Close the browser
    driver.quit()
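One caveat about the script above: find_element fails immediately if the page hasn’t finished rendering. In Selenium you’d normally solve this with WebDriverWait and expected_conditions; the underlying polling idea looks like this (a framework-agnostic sketch, not Selenium’s actual implementation):

```python
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# In a real test you would pass a locator check, e.g.:
#   wait_until(lambda: driver.find_elements(By.ID, "join_neu_email_field"))
```

In practice, prefer Selenium’s built-in WebDriverWait, which implements this same pattern with richer error reporting.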

2. Write a test case with Katalon

Let’s see how you can do the same thing with Katalon, minus the coding part.

First, you can download Katalon Studio here.

Next, launch it, and go to File > New > Project to create your first test project.


Now create your first test case. Let’s call it a “Web Test Case”.

You now have a productive IDE for writing automated test cases, with three modes to choose from:

  • No-code: Using the Record-and-Playback feature, testers can capture their manual actions on-screen and convert them into automated test scripts that can be re-run as many times as needed.
  • Low-code: Katalon offers a library of Built-in Keywords, which are pre-written code snippets with customizable parameters for specific actions. For instance, a "Click" keyword handles the internal logic to find and click an element (like a button). Testers only need to specify the element, without dealing with the underlying code.
  • Full-code: Testers can switch to Scripting mode to write their own test scripts from scratch. They can also toggle between no-code and low-code modes as needed. This approach combines the ease of point-and-click testing with the flexibility of full scripting, allowing testers to focus on what to test rather than how to write the tests, thus boosting productivity.


You have a lot of environments to choose from. The cool thing is that you can execute any type of test case across browsers and operating systems, and even reuse test artifacts across applications under test (AUTs). This saves significant time and effort in test creation, freeing you to focus on high-value strategic tasks.

The bottom line

Test automation is about building trust in your product at scale. A good automation test case is clear, reliable, and purpose-built. It validates what matters, runs consistently, and delivers fast feedback to your team. But it only works if you automate the right things: stable flows, repeatable tasks, and system-critical behaviors.

Manual testing catches edge cases. Automation holds the line.

With the right framework, the right test cases, and the right balance, you don’t have to choose between speed and quality. You can deliver both, confidently, at every release.