
How To Write Test Cases? Detailed Guide With Examples

A test case is the backbone of any testing project.

 

The key to effective software testing is not just how you write test cases, but what scenarios you choose to cover. Once the right test cases are created, they should be closely aligned with the overall test design and test execution strategy to ensure comprehensive and efficient testing.
 

Let’s explore how to write test cases in the most strategic fashion.

What is a Test Case?


A test case is a specific set of conditions or variables under which a tester determines whether a system, software application, or one of its features is working as intended.

 

Here’s an example. You are testing the login pop-up of Etsy, one of the leading e-commerce platforms. You’ll need several test cases to check that every feature of this pop-up works smoothly.

Brainstorming time: what features do you want to test for this Login pop-up?


Let’s list down some of them:

  1. Verify Login with Valid Credentials
  2. Verify Login with Invalid Credentials
  3. Verify "Forgot Password" Functionality
  4. Verify "Trouble Signing In" Link
  5. Verify "Continue with Google" Functionality
  6. Verify "Continue with Facebook" Functionality
  7. Verify "Continue with Apple" Functionality
  8. Verify "Register" Button Redirect
  9. Verify Empty Email Field Handling
  10. Verify Empty Password Field Handling
  11. Verify Error Message for Unregistered Email
  12. Verify Session Timeout Handling
     

That’s just a quick first pass. As a rule, the more complex a system is, the more test cases it needs.

 

Learn More: 100+ Test Cases For The Login Page That You Will Need

Components of a Test Case

A test case usually includes:

  1. Test Case ID – A unique identifier for tracking the test case.
  2. Test Description – A brief summary of what the test will verify.
  3. Preconditions – The required setup or conditions before the test can be run.
  4. Test Steps – A list of actions to perform during the test.
  5. Expected Result – The outcome the system should produce if it works correctly.
  6. Actual Result – The outcome observed during testing.
  7. Status – Whether the test passed, failed, or is still in progress.
  8. Comments/Notes – Any additional observations or details about the test.
 


Test Case Template Example

Here is an example test case for the Etsy login pop-up:

Test Case ID: TC001

Description: Verify Login with Valid Credentials

Preconditions: User is on the Etsy login pop-up

Test Steps:

  1. Enter a valid email address.
  2. Enter the corresponding valid password.
  3. Click the "Sign In" button.

Test Data:

  Email: validuser@example.com
  Password: validpassword123

Expected Result: The user is successfully logged in and redirected to the homepage or the previously intended page.

Actual Result: (To be filled in after execution)

Postconditions: User is logged in and the session is active

Pass/Fail Criteria:

  Pass: The user is logged in and redirected correctly.
  Fail: An error message is displayed or the user is not logged in.

Comments: Ensure the test environment has network access and the server is operational.

Best Practices When Writing a Test Case

Follow these practices when writing your test cases:

Test Case ID

  1. Use a consistent naming convention.
  2. Ensure IDs are unique.
  3. Use a prefix indicating the module or feature.
  4. Keep it short but descriptive.
  5. Maintain a central repository for all test case IDs.

Description

  1. Be concise and clear.
  2. Clearly state the purpose of the test.
  3. Make it understandable for anyone reading the test case.
  4. Include the expected behavior or outcome.
  5. Avoid technical jargon and ambiguity.

Preconditions

  1. Clearly specify setup requirements.
  2. Ensure all necessary conditions are met.
  3. Include relevant system or environment states.
  4. Detail any specific user roles or configurations needed.
  5. Verify preconditions before test execution.

Test Steps

  1. Number each step sequentially.
  2. Write steps clearly and simply.
  3. Use consistent terminology and actions.
  4. Ensure steps are reproducible.
  5. Avoid combining multiple actions into one step.

Test Data

  1. Use realistic and valid data.
  2. Clearly specify each piece of test data.
  3. Avoid hardcoding sensitive information.
  4. Utilize data-driven testing for scalability (see the sketch after this list).
  5. Store test data separately from test scripts.

Expected Result

  1. Be specific and clear about the outcome.
  2. Include UI changes, redirects, and messages.
  3. Align with the acceptance criteria.
  4. Cover all aspects of the functionality being tested.
  5. Make results measurable and observable.

Actual Result

  1. Document the actual outcome during execution.
  2. Provide detailed information on discrepancies.
  3. Include screenshots or logs if applicable.
  4. Use a consistent format for recording results.
  5. Verify results against the expected outcomes.

Postconditions

  1. Specify the expected system state post-test.
  2. Include any necessary cleanup steps.
  3. Ensure the system is stable for subsequent tests.
  4. Verify that changes made during the test are reverted if needed.
  5. Document any residual effects on the environment.

Pass/Fail Criteria

  1. Clearly define pass/fail conditions.
  2. Use measurable and observable outcomes.
  3. Ensure criteria are objective.
  4. Include specific error messages or behaviors for fails.
  5. Align criteria with expected results and requirements.

Comments

  1. Include additional helpful information.
  2. Note assumptions, dependencies, or constraints.
  3. Provide troubleshooting tips.
  4. Record any deviations from the standard process.
  5. Mention any special instructions for executing the test.
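
To make the data-driven point under Test Data concrete, here is a minimal pytest sketch. The credential sets and the attempt_login helper are hypothetical placeholders; in practice, test data would come from an external, secure source rather than being hardcoded.

import pytest

# Hypothetical credential sets; real test data belongs outside the script.
LOGIN_CASES = [
    ("validuser@example.com", "validpassword123", True),
    ("validuser@example.com", "wrong-password", False),
    ("unregistered@example.com", "any-password", False),
]

@pytest.mark.parametrize("email, password, should_succeed", LOGIN_CASES)
def test_login(email, password, should_succeed):
    # attempt_login is a hypothetical helper wrapping the full login flow.
    result = attempt_login(email, password)
    assert result.success == should_succeed

Adding another credential set is now a one-line change, which is exactly what makes data-driven test cases scale.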

Here are some more tips for you:

  1. Keep your test cases isolated: each one should test a single, specific scenario. This makes debugging far easier down the road.
  2. Leverage a test case management system that syncs your test execution with test management and reporting.
  3. Run an exploratory testing session first to gain a comprehensive understanding of the system under test. This helps you decide which test cases to work on.
  4. Leverage the Gherkin format (Given, When, Then) and the BDD testing methodology to express test scenarios in human-readable language, so other stakeholders can join the conversation.
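
For instance, the valid-credentials case from the Etsy example could be phrased in Gherkin like this (a minimal sketch; the exact step wording is up to your team):

Scenario: Login with valid credentials
  Given I am on the Etsy login pop-up
  When I enter a valid email and password
  And I click the "Sign in" button
  Then I am redirected to the homepage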

How To Write an Automation Test Script?

Writing manual test cases is primarily about noting down the test steps; executing them simply means following those steps exactly as planned.

With automation testing, there is an extra decision up front: whether to build on a framework (such as Selenium) or use a testing tool (such as Katalon). Let’s walk through both.

1. Write a test case in Selenium

Selenium logo

Install Selenium, plus webdriver-manager (which the script below uses to fetch a matching ChromeDriver), if you haven’t already:

pip install selenium webdriver-manager

The steps we will code include:

  1. Set up Chrome to run in headless mode (no GUI). This is optional.
  2. Initialize the WebDriver.
  3. Use driver.get(url) to open the Etsy login page.
  4. Locate the web elements by their IDs and XPath.
  5. Enter the credentials.
  6. Submit the form.
  7. Check the login status.

 

Here’s your script:
 

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager

# Set up Chrome options
chrome_options = Options()
chrome_options.add_argument("--headless")  # Run in headless mode if you don't need a UI

# Initialize the WebDriver
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options)

try:
    # Navigate to the Etsy login page
    driver.get("https://www.etsy.com/signin")

    # Find the email and password fields and the login button
    email_field = driver.find_element(By.ID, "join_neu_email_field")
    password_field = driver.find_element(By.ID, "join_neu_password_field")
    login_button = driver.find_element(By.XPATH, "//button[@type='submit']")

    # Enter credentials (replace with valid credentials)
    email_field.send_keys("your-email@example.com")
    password_field.send_keys("your-password")

    # Click the login button
    login_button.click()

    # Add assertions or further actions as needed.
    # For example, to check whether login was successful:
    # account_menu = driver.find_element(By.ID, "account-menu-id")
    # assert account_menu.is_displayed(), "Login failed"

finally:
    # Close the browser
    driver.quit()
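
One caveat: the element IDs above are assumptions that can break whenever Etsy changes its markup, and the fields may not exist the instant the page loads. A slightly more robust sketch replaces the direct find_element calls in the script above with explicit waits:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for the email field to become visible before
# interacting with it; the locator is the same assumption as above.
wait = WebDriverWait(driver, 10)
email_field = wait.until(EC.visibility_of_element_located((By.ID, "join_neu_email_field")))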

2. Write a test case with Katalon


Let’s see how you can do the same thing with Katalon, minus the coding part.
 

First, you can download Katalon Studio here.
 

Next, launch it, and go to File > New > Project to create your first test project.  
 

 

Now create your first test case. Let’s call it a “Web Test Case”.

 


You now have a productive IDE for writing automated test cases, with three modes to choose from:

  • No-code: Using the Record-and-Playback feature, testers can capture their manual actions on-screen and convert them into automated test scripts that can be re-run as many times as needed.
  • Low-code: Katalon offers a library of Built-in Keywords, which are pre-written code snippets with customizable parameters for specific actions. For instance, a "Click" keyword handles the internal logic to find and click an element (like a button). Testers only need to specify the element, without dealing with the underlying code.
  • Full-code: Testers can switch to Scripting mode to write their own test scripts from scratch. They can also toggle between no-code and low-code modes as needed. This approach combines the ease of point-and-click testing with the flexibility of full scripting, allowing testers to focus on what to test rather than how to write the tests, thus boosting productivity.


You also get a wide range of execution environments: you can run any type of test case across browsers and operating systems, and even reuse test artifacts across applications under test (AUTs). This saves significant time and effort in test creation, freeing you to focus on high-value strategic tasks.

 

FAQs On Writing Test Cases

1. What is the difference between a Test Case and a Test Scenario?

Test Case:

  • Definition: A test case is a detailed document that describes a specific condition to test, the steps to execute the test, the input data, the expected result, and the actual result after execution.
  • Components: Typically includes Test Case ID, Description, Preconditions, Test Steps, Test Data, Expected Result, Actual Result, Postconditions, Pass/Fail Criteria, and Comments.
  • Detail Level: Highly detailed and specific, designed to ensure that every aspect of the feature is tested thoroughly.
  • Purpose: To verify that a particular function of the application behaves as expected under specified conditions.

Test Scenario:

  • Definition: A test scenario is a high-level description of what needs to be tested. It represents a user journey or a functional aspect of the application that needs to be verified.
  • Components: Usually includes a scenario ID and a brief description of the functionality to be tested.
  • Detail Level: Less detailed than a test case, more abstract, and focused on the end-to-end functionality rather than specific inputs and outputs.
  • Purpose: To ensure that all possible user workflows and business processes are covered.

 

2. How to write test cases in Agile?

Writing test cases in Agile involves adapting to the iterative and incremental nature of the methodology. Here are some best practices:

  1. Collaborate Early and Often:
    • Work closely with the development team, product owner, and stakeholders to understand requirements.
    • Participate in sprint planning and grooming sessions to gain insights into user stories and acceptance criteria.
  2. Write Test Cases Concurrently:
    • Develop test cases as user stories are being defined and refined.
    • Use acceptance criteria as a guide to write initial test cases.
  3. Keep Test Cases Lightweight:
    • Focus on the essence of what needs to be tested.
    • Write high-level test cases initially and add details as needed.
  4. Use User Stories:
    • Align test cases with user stories to ensure all aspects of the functionality are covered.
    • Write test cases that reflect real user scenarios and interactions.
  5. Prioritize Test Cases:
    • Prioritize test cases based on the criticality and risk of the features being tested.
    • Focus on high-priority test cases first to ensure core functionality is tested early.
  6. Automate Where Possible:
    • Identify repetitive and high-impact test cases for automation.
    • Integrate automated tests into the CI/CD pipeline for continuous feedback.
  7. Review and Update Regularly:
    • Continuously review and update test cases based on feedback and changes in requirements.
    • Involve the team in test case reviews to ensure coverage and accuracy.
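
To make points 2 and 3 concrete, here is a minimal sketch of lightweight, acceptance-criteria-driven test cases in pytest. The user story and test names are hypothetical; the skeletons get filled in as the story is refined during the sprint.

import pytest

# Hypothetical user story: "As a shopper, I can log in with my email and
# password so that I can access my account."
# Each acceptance criterion becomes one lightweight test case up front.

@pytest.mark.skip(reason="skeleton: implement once the story is refined")
def test_valid_credentials_reach_homepage():
    """AC1: a registered user with correct credentials lands on the homepage."""

@pytest.mark.skip(reason="skeleton: implement once the story is refined")
def test_wrong_password_shows_inline_error():
    """AC2: a wrong password shows an inline error and creates no session."""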

 

3. What are the types of test cases?

There are several types of test cases, each serving a different purpose in the testing process:

  1. Functional Test Cases: verify the functionality of the software according to the requirements.
  2. Non-Functional Test Cases: assess aspects like performance, usability, reliability, and scalability.
  3. Regression Test Cases: ensure that new changes or enhancements have not adversely affected existing functionalities.
  4. Smoke Test Cases: perform a basic check to ensure that the critical functionalities of the application are working.
  5. Sanity Test Cases: focus on a narrow area of functionality to ensure that specific changes or bug fixes are working as intended.
  6. Integration Test Cases: verify that different modules or components of the application work together as expected.
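
In an automated suite, these categories often map onto test markers so that subsets can run on demand. Here is a rough pytest sketch; the marker names and placeholder tests are illustrative, and custom markers would need to be registered (for example, in pytest.ini):

import pytest

# Illustrative placeholders; "smoke" and "regression" are assumed to be
# registered as custom markers in pytest.ini.

@pytest.mark.smoke
def test_login_page_loads():
    ...  # placeholder: critical-path check run on every build

@pytest.mark.regression
def test_existing_checkout_flow_still_works():
    ...  # placeholder: guards previously working functionality

With that in place, running only the smoke subset is a single command: pytest -m smoke.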
