Test automation is about speed, scale, and precision. As software teams ship faster, the pressure on QA increases, but manual testing can’t keep up with that speed. You need a way to validate key workflows without slowing things down.
That’s where automation testing comes in. Well-designed automation test cases significantly speed up your testing and shorten release cycles.
In this guide, I am going to show you how to write test cases for automation testing, along with the best practices to follow.
Let's dive right in!
At their core, automation test cases are scripted instructions that simulate user behavior and validate the outcome. They define four things:
Preconditions: What must be true before the test runs (e.g. user logged in, database seeded)
Test Steps: The sequence of interactions to simulate (e.g. fill form, click submit)
Expected Results: What the system should do (e.g. show success message, redirect to dashboard)
Actual Results: What actually happened when the script ran
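As a sketch, here is how those four parts map onto an automated check. The `login` function and seeded user store are hypothetical, purely for illustration:

```python
# A minimal sketch of the four parts of an automation test case.
# The `login` function and USERS store are hypothetical stand-ins.

USERS = {"valid@example.com": "correct-password"}  # Precondition: a seeded user store

def login(email: str, password: str) -> bool:
    """Pretend authentication backend."""
    return USERS.get(email) == password

def test_login_with_valid_credentials():
    # Precondition: the user exists in the seeded store
    assert "valid@example.com" in USERS

    # Test steps: attempt to log in with valid credentials
    actual_result = login("valid@example.com", "correct-password")

    # Expected result: login succeeds; the assertion compares actual vs. expected
    assert actual_result is True

test_login_with_valid_credentials()
```

Notice that the assertion is where "expected result" meets "actual result": a script without an assertion simulates behavior but validates nothing.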
But here’s the nuance: you can’t just copy-paste your manual test cases into a script. Automation has different constraints. It needs to be stable, repeatable, and code-aware.
Manual testers rely on intuition. Automation relies on clarity. So test cases must be purpose-built with automation in mind, i.e. minimal UI dependency, clear assertions, isolated flows.
📚 Read More: 9 Core Benefits of Test Automation
Good automation test cases pay off in every sprint. Here’s why:
Faster regression cycles: What used to take days now runs in hours, or even less.
Consistent results: No more human error. The same steps, the same checks, every time.
Scalability: Run the same test across browsers, devices, and environments.
Team velocity: QA shifts from bottleneck to accelerator. Developers get feedback faster, and bugs are caught earlier.
But automation should not be seen as a silver bullet. The value comes from choosing the right things to automate, not trying to automate everything.
💡 Pro tip: Focus on flows that are stable, high-impact, and repeatable. Skip anything that’s volatile or hard to assert.
Test automation should start where it makes the most impact.
Login and registration flows: Critical paths users hit daily. Simple but important.
Form validations: Check both success (valid inputs) and failure (invalid inputs).
Backend verifications: Confirm database updates or API responses — ideally through integration tests, not the UI.
Smoke and regression tests: Repetitive and essential. Perfect for automation.
Cross-browser and platform tests: Ensure consistency across environments.
These are high-leverage tests: small effort, big payoff.
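A backend verification from the list above can be sketched like this. The `fetch_order` client is a hypothetical stand-in for a real API call; it returns a canned response here so the example is self-contained:

```python
# A sketch of a backend verification: assert on an API response
# instead of driving the UI. `fetch_order` stands in for a real
# API client (e.g. a thin wrapper that calls GET /orders/{id});
# here it returns canned data so the example runs on its own.

def fetch_order(order_id: int) -> dict:
    # In a real test this would hit your API or query the database
    return {"id": order_id, "status": "confirmed", "total_cents": 2499}

def test_order_is_persisted():
    response = fetch_order(42)
    # Verify backend state directly, with no browser involved
    assert response["id"] == 42
    assert response["status"] == "confirmed"
    assert response["total_cents"] > 0

test_order_is_persisted()
```

Because no browser is involved, checks like this run in milliseconds and fail for exactly one reason, which is why they belong lower in the test stack than UI flows.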
Rapidly changing UIs: Automation breaks easily when the UI keeps shifting.
Highly visual features: Layout bugs, image alignment are best left to human eyes.
Exploratory scenarios: The whole point is flexibility. Don’t script curiosity.
💡 Pro tip: End-to-end UI tests are expensive to build and maintain. Use them sparingly: only for business-critical flows. For everything else, lean on unit and integration tests.
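To make that concrete: many rules that could be tested through the UI can be covered by a unit test instead. The simplified email validator below is hypothetical; a UI test of the same rule would need a browser, a rendered form, and a submit round-trip:

```python
# A sketch showing a validation rule covered at the unit level.
# `is_valid_email` is a deliberately simplified, hypothetical validator;
# the unit test exercises it directly, with no browser required.
import re

def is_valid_email(value: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

def test_email_validation():
    assert is_valid_email("user@example.com")
    assert not is_valid_email("not-an-email")

test_email_validation()
```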
Automation thrives on clarity. Every test case should include the following:
Test Case ID: A unique identifier used to track and reference the test case.
Description: A short, specific, action-oriented statement that clearly summarizes what is being tested.
Preconditions: Conditions or system state that must be met before the test can be executed.
Test Steps: A step-by-step list of actions to perform during the test.
Test Data: Specific input values required for executing the test case.
Expected Result: The system’s expected behavior or output based on the test steps.
Actual Result: The actual behavior or output observed during test execution.
Postconditions: The system state after the test completes (optional but helpful for chaining tests).
Pass/Fail Criteria: The specific conditions that determine whether the test has passed or failed.
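One way to keep these fields consistent across a suite is to model them as a structured record. The class below is an illustrative sketch, not a standard; its field names simply mirror the list above:

```python
# A sketch of the test case fields as a structured record, so cases
# can be stored, reported on, and evaluated uniformly. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_case_id: str
    description: str
    preconditions: list[str]
    test_steps: list[str]
    test_data: dict[str, str]
    expected_result: str
    actual_result: str = ""  # filled in after execution
    postconditions: list[str] = field(default_factory=list)

    @property
    def passed(self) -> bool:
        # Pass/fail criterion: actual behavior matches the expectation
        return self.actual_result == self.expected_result

tc = TestCase(
    test_case_id="TC001",
    description="Verify login with valid credentials",
    preconditions=["User is on the login popup"],
    test_steps=["Enter email", "Enter password", "Click Sign In"],
    test_data={"email": "validuser@example.com"},
    expected_result="User is redirected to the homepage",
)
tc.actual_result = "User is redirected to the homepage"
assert tc.passed
```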
Let's create a test case for the Etsy login popup:
| Component | Details |
| --- | --- |
| Test Case ID | TC001 |
| Description | Verify login with valid credentials |
| Preconditions | User is on the Etsy login popup |
| Test Steps | 1. Enter a valid email address. 2. Enter the corresponding valid password. 3. Click the "Sign In" button. |
| Test Data | Email: validuser@example.com; Password: validpassword123 |
| Expected Result | User is successfully logged in and redirected to the homepage or the previously intended page. |
| Actual Result | (To be filled in after execution) |
| Postconditions | User is logged in and the session is active |
| Pass/Fail Criteria | Pass: the user is logged in and redirected correctly. Fail: an error message is displayed or the user is not logged in. |
| Comments | Ensure the test environment has network access and the server is operational. |
Now let's turn that into a test script.
There are two approaches you can take: scripting the test yourself with a framework like Selenium, or building it without code in a tool like Katalon Studio. Let's start with Selenium.
Install Selenium (along with webdriver-manager, which the script below uses to fetch the right ChromeDriver) if you haven’t already:

```shell
pip install selenium webdriver-manager
```
The steps we will code include:

1. Navigate to the Etsy login page.
2. Locate the email field, password field, and sign-in button.
3. Enter the credentials.
4. Click the sign-in button and verify the result.

Here’s your script:
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager

# Set up Chrome options
chrome_options = Options()
chrome_options.add_argument("--headless")  # Run in headless mode if you don’t need a UI

# Initialize the WebDriver
driver = webdriver.Chrome(
    service=Service(ChromeDriverManager().install()),
    options=chrome_options,
)

try:
    # Navigate to the Etsy login page
    driver.get("https://www.etsy.com/signin")

    # Find the email and password fields and the login button
    email_field = driver.find_element(By.ID, "join_neu_email_field")
    password_field = driver.find_element(By.ID, "join_neu_password_field")
    login_button = driver.find_element(By.XPATH, "//button[@type='submit']")

    # Enter credentials (replace with valid credentials)
    email_field.send_keys("your-email@example.com")
    password_field.send_keys("your-password")

    # Click the login button
    login_button.click()

    # Add assertions or further actions as needed.
    # For example, check whether login was successful:
    # account_menu = driver.find_element(By.ID, "account-menu-id")
    # assert account_menu.is_displayed(), "Login failed"
finally:
    # Close the browser
    driver.quit()
```
Let’s see how you can do the same thing with Katalon, minus the coding part.
First, download Katalon Studio from the Katalon website.
Next, launch it, and go to File > New > Project to create your first test project.
Now create your first test case. Let’s call it a “Web Test Case”.
You now have a productive IDE for writing automated test cases, with three modes to choose from.
Here's how it works:
You have a wide range of environments to choose from. The cool thing is that you can execute any type of test case across browsers and operating systems, and even reuse test artifacts across applications under test (AUTs). This saves significant time and effort in test creation, letting you focus on high-value strategic tasks.
Test automation is about building trust in your product at scale. A good automation test case is clear, reliable, and purpose-built. It validates what matters, runs consistently, and delivers fast feedback to your team. But it only works if you automate the right things: stable flows, repeatable tasks, and system-critical behaviors.
Manual testing catches edge cases. Automation holds the line.
With the right framework, the right test cases, and the right balance, you don’t have to choose between speed and quality. You can deliver both, confidently, at every release.