Test cases are the backbone of any testing project. The art of software testing lies in writing the right test cases. It's not so much about how you write them as about which scenarios you write them for. From there, test cases need to be closely tied to test design and test execution.
Let’s explore how to write test cases in the most strategic fashion.
A test case is a specific set of conditions or variables under which a tester determines whether a system, software application, or one of its features is working as intended.
Here’s an example. You are testing the Login pop-up of Etsy, one of the leading e-commerce platforms. You’ll need several test cases to check that all features of this page are working smoothly.
Brainstorming time: what features do you want to test for this Login pop-up?
Let’s list down some of them:
That’s just a quick and immediate list. As a matter of fact, the more complex a system is, the more test cases are needed.
Learn More: 100+ Test Cases For The Login Page That You Will Need
Before you write a test case, ask yourself 3 questions:
Once all of those 3 questions have been answered, you can start the test case design and eventually test authoring. It’s safe to say that 80% of writing a test case belongs to the planning and designing part, and only 20% is actually scripting. Good test case design is key to achieving good test coverage.
Let’s start with the first steps.
Why do we need to design before we write, though?

It's simple: there are far more things to test than it first appears. In the example above, the Login page alone required 10 test cases to cover most scenarios. You need techniques to enumerate all test cases for a given scenario before you start writing those tests.
First question: do you have access to the internal code?
You are doing black box testing if you don’t have access to internal code. The entire system is essentially a black box. You can only see and test what the system is programmed to show you.
When testers don’t need to understand the algorithm, they can concentrate on determining whether the software meets user expectations. They must explore and learn about the system to generate test case ideas. However, this approach can result in limited test coverage, as some features with non-obvious behavior might be overlooked.
In that case, here are some techniques for you to design your test cases:
1. Equivalence Class Testing: you divide input data into groups where all values in a group should be treated the same way by the system.
2. Boundary Value Analysis: this is a more granular version of equivalence class testing. Here you test values at the edges of input ranges to find errors at the boundaries.
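A minimal sketch of both techniques in Python, assuming a hypothetical password rule (6 to 128 characters) purely for illustration:

```python
# Hypothetical rule under test: passwords must be 6-128 characters long.
def is_valid_password(password: str) -> bool:
    return 6 <= len(password) <= 128

# Equivalence Class Testing: three classes (too short, valid, too long),
# one representative value per class.
equivalence_cases = [
    ("abc", False),        # too short
    ("secret123", True),   # valid
    ("x" * 200, False),    # too long
]

# Boundary Value Analysis: values at and just outside the edges of the range.
boundary_cases = [
    ("x" * 5, False),    # just below the lower boundary
    ("x" * 6, True),     # lower boundary
    ("x" * 128, True),   # upper boundary
    ("x" * 129, False),  # just above the upper boundary
]

for password, expected in equivalence_cases + boundary_cases:
    assert is_valid_password(password) == expected
```

Note how boundary value analysis adds four cases that equivalence partitioning alone would miss: off-by-one bugs live exactly at those edges.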
3. Decision Table Testing: you use a table to test different combinations of input conditions and their corresponding actions or results.
| | Rule-1 | Rule-2 | Rule-3 | Rule-4 | Rule-5 | Rule-6 |
|---|---|---|---|---|---|---|
| **Conditions** | | | | | | |
| Credit Score | High | High | Medium | Medium | Low | Low |
| Income | High | Low | High | Low | High | Low |
| **Actions** | | | | | | |
| Loan Approval | Yes | Yes | Yes | No | No | No |
| Interest Rate | Low | Medium | Medium | N/A | N/A | N/A |
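The decision table above can be exercised directly in code. Here is a sketch where `loan_decision` is a hypothetical implementation of the table's logic, and each rule column becomes one test case:

```python
def loan_decision(credit_score: str, income: str):
    """Return (approved, interest_rate) following the decision table."""
    if credit_score == "High":
        return True, ("Low" if income == "High" else "Medium")
    if credit_score == "Medium" and income == "High":
        return True, "Medium"
    return False, None  # all other combinations are rejected

# One test per rule (column) in the decision table.
rules = [
    ("High",   "High", (True,  "Low")),     # Rule-1
    ("High",   "Low",  (True,  "Medium")),  # Rule-2
    ("Medium", "High", (True,  "Medium")),  # Rule-3
    ("Medium", "Low",  (False, None)),      # Rule-4
    ("Low",    "High", (False, None)),      # Rule-5
    ("Low",    "Low",  (False, None)),      # Rule-6
]
for credit_score, income, expected in rules:
    assert loan_decision(credit_score, income) == expected
```

The value of the technique is that the table makes missing combinations obvious before a single line of test code is written.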
You are doing white box testing if you have access to the internal code. Test case design with white box testing allows you to deep dive into the implementation paths through the system. Now that you have internal knowledge of how the system works, you can tailor test cases specifically to its logic.
With white box testing, you would need to build a Control Flow Graph (CFG) to illustrate all of the possible scenarios that can happen for a specific feature. For example, in this CFG, you can see that there are 3 execution paths, which means there are 3 test cases to write:
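Since the CFG figure itself isn't reproduced here, here is an illustrative function (a hypothetical shipping rule, not from the article) with three execution paths, and one test case per path:

```python
def shipping_tier(order_total: float) -> str:
    if order_total >= 100:       # path 1: first branch taken
        return "free shipping"
    elif order_total >= 50:      # path 2: second branch taken
        return "discounted shipping"
    else:                        # path 3: fall-through branch
        return "standard shipping"

# Three execution paths in the control flow graph -> three test cases.
assert shipping_tier(150) == "free shipping"
assert shipping_tier(75) == "discounted shipping"
assert shipping_tier(20) == "standard shipping"
```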
Once you have designed your test cases, it’s time to note them down. The real work begins here.
The anatomy of a test case consists of:
Here is an example of a login test case for the Etsy login popup:
| Component | Details |
|---|---|
| Test Case ID | TC001 |
| Description | Verify Login with Valid Credentials |
| Preconditions | User is on the Etsy login popup |
| Test Steps | 1. Enter a valid email address. 2. Enter the corresponding valid password. 3. Click the "Sign In" button. |
| Test Data | Email: validuser@example.com Password: validpassword123 |
| Expected Result | User is successfully logged in and redirected to the homepage or the previously intended page. |
| Actual Result | (To be filled in after execution) |
| Postconditions | User is logged in and the session is active |
| Pass/Fail Criteria | Pass: the user is logged in and redirected correctly. Fail: an error message is displayed or the user is not logged in. |
| Comments | Ensure the test environment has network access and the server is operational. |
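One way to keep this anatomy consistent across an entire suite is to model it as a data structure. A sketch in Python, using the TC001 details from the table above (the field names simply mirror the table's components):

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    """One record per test case, mirroring the anatomy described above."""
    test_case_id: str
    description: str
    preconditions: str
    test_steps: list
    test_data: dict
    expected_result: str
    actual_result: str = ""      # filled in after execution
    postconditions: str = ""
    pass_fail_criteria: str = ""
    comments: str = ""


tc001 = TestCase(
    test_case_id="TC001",
    description="Verify Login with Valid Credentials",
    preconditions="User is on the Etsy login popup",
    test_steps=[
        "Enter a valid email address",
        "Enter the corresponding valid password",
        'Click the "Sign In" button',
    ],
    test_data={"email": "validuser@example.com", "password": "validpassword123"},
    expected_result="User is logged in and redirected to the homepage",
)
```

Storing test cases in a structured form like this makes it easy to export them to a test management tool or generate reports later.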
Follow these best practices when writing your test cases:
| Component | Best Practices |
|---|---|
| Test Case ID | 1. Use a consistent naming convention. 2. Ensure IDs are unique. 3. Use a prefix indicating the module/feature. 4. Keep it short but descriptive. 5. Maintain a central repository for all test case IDs. |
| Description | 1. Be concise and clear. 2. Clearly state the purpose of the test. 3. Make it understandable for anyone reading the test case. 4. Include the expected behavior or outcome. 5. Avoid technical jargon and ambiguity. |
| Preconditions | 1. Clearly specify setup requirements. 2. Ensure all necessary conditions are met. 3. Include relevant system or environment states. 4. Detail any specific user roles or configurations needed. 5. Verify preconditions before test execution. |
| Test Steps | 1. Number each step sequentially. 2. Write steps clearly and simply. 3. Use consistent terminology and actions. 4. Ensure steps are reproducible. 5. Avoid combining multiple actions into one step. |
| Test Data | 1. Use realistic and valid data. 2. Clearly specify each piece of test data. 3. Avoid hardcoding sensitive information. 4. Utilize data-driven testing for scalability. 5. Store test data separately from test scripts. |
| Expected Result | 1. Be specific and clear about the outcome. 2. Include UI changes, redirects, and messages. 3. Align with the acceptance criteria. 4. Cover all aspects of the functionality being tested. 5. Make results measurable and observable. |
| Actual Result | 1. Document the actual outcome during execution. 2. Provide detailed information on discrepancies. 3. Include screenshots or logs if applicable. 4. Use a consistent format for recording results. 5. Verify results against the expected outcomes. |
| Postconditions | 1. Specify the expected system state post-test. 2. Include any necessary cleanup steps. 3. Ensure the system is stable for subsequent tests. 4. Verify that changes made during the test are reverted if needed. 5. Document any residual effects on the environment. |
| Pass/Fail Criteria | 1. Clearly define pass/fail conditions. 2. Use measurable and observable outcomes. 3. Ensure criteria are objective. 4. Include specific error messages or behaviors for fails. 5. Align criteria with expected results and requirements. |
| Comments | 1. Include additional helpful information. 2. Note assumptions, dependencies, or constraints. 3. Provide troubleshooting tips. 4. Record any deviations from the standard process. 5. Mention any special instructions for executing the test. |
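To illustrate two of the Test Data practices above (data-driven testing, and keeping test data separate from scripts), here is a sketch where the inline JSON stands in for an external data file and `attempt_login` is a hypothetical function under test:

```python
import json

# Hypothetical test data kept separate from the test logic; in a real suite
# this would live in an external file such as login_cases.json.
TEST_DATA = json.loads("""
[
  {"id": "TC001", "email": "validuser@example.com", "password": "validpassword123", "should_pass": true},
  {"id": "TC002", "email": "validuser@example.com", "password": "wrong", "should_pass": false},
  {"id": "TC003", "email": "", "password": "validpassword123", "should_pass": false}
]
""")


def attempt_login(email: str, password: str) -> bool:
    # Stand-in for the real login call under test.
    return email == "validuser@example.com" and password == "validpassword123"


# One loop drives every data row: adding a new test case means adding a new
# row of data, not writing new test code.
for case in TEST_DATA:
    result = attempt_login(case["email"], case["password"])
    assert result == case["should_pass"], f"{case['id']} failed"
```

In a pytest-based suite, the same idea is usually expressed with `@pytest.mark.parametrize`, which reports each data row as a separate test.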
Here are some more tips for you:
Writing manual test cases is primarily about noting down the test steps.
When it comes to automation testing, the process becomes more complicated.
If you choose manual testing, you only have to execute them following the exact steps as planned. If you go with automation testing, first you need to choose whether you’ll go with a framework or a testing tool.
Simply put:
For example, you can use Selenium to automate the Login page testing of Etsy. Carefully read through Selenium documentation to gain an understanding of its syntax. After that, launch your favorite IDE. Python is usually the language of choice for Selenium automation.
Install Selenium (and webdriver-manager, which the script below uses) if you haven’t already:

```shell
pip install selenium webdriver-manager
```
Here’s your script:
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager

# Set up Chrome options
chrome_options = Options()
chrome_options.add_argument("--headless")  # Run in headless mode if you don't need a UI

# Initialize the WebDriver
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options)

try:
    # Navigate to the Etsy login page
    driver.get("https://www.etsy.com/signin")

    # Find the email and password fields and the login button
    email_field = driver.find_element(By.ID, "join_neu_email_field")
    password_field = driver.find_element(By.ID, "join_neu_password_field")
    login_button = driver.find_element(By.XPATH, "//button[@type='submit']")

    # Enter credentials (replace with valid credentials)
    email_field.send_keys("your-email@example.com")
    password_field.send_keys("your-password")

    # Click the login button
    login_button.click()

    # Add assertions or further actions as needed.
    # For example, check if login was successful:
    # account_menu = driver.find_element(By.ID, "account-menu-id")
    # assert account_menu.is_displayed(), "Login failed"
finally:
    # Close the browser
    driver.quit()
```
The steps we coded here are:
Here are some best practices you should follow:
Let’s see how you can do the same thing with Katalon, minus the coding part.
First, you can download Katalon Studio here.
Next, launch it, and go to File > New > Project to create your first test project.
Now create your first test case. Let’s call it a “Web Test Case”.
You now have a productive IDE in which to write automated test cases, with 3 modes to choose from:
Here's how it works:
You have a lot of environments to choose from. The cool thing is that you can execute any type of test case across browsers and operating systems, and even reuse test artifacts across AUTs. This saves a great deal of time and effort in test creation, allowing you to focus on high-value strategic tasks.
Test Case: a detailed set of steps, test data, and expected results used to verify one specific behavior of the system.

Test Scenario: a high-level description of a feature or functionality to be tested; a single scenario typically breaks down into multiple test cases.
Writing test cases in Agile involves adapting to the iterative and incremental nature of the methodology. Here are some best practices:
There are several types of test cases, each serving a different purpose in the testing process: