
Test Case Design Techniques: The Definitive Guide

 

The first step is often the hardest, and in software testing this is especially true. When presented with a system, how do you decide what to test? Standing in the middle of the unknown, we need guidance, and learning about test case design techniques is a good place to start.
 

In this article, we’ll show you:

  1. Two types of software systems, black-box and white-box, each requiring a unique approach to test case design
  2. Techniques to design test cases for each type of system
  3. Best practices for each type of system

 Let’s dive in!

 

Black-box System vs White-box System

Types of software systems you should know to choose the right test case design techniques
 

To design a test case, you first need to consider the opacity of the system.
 

At one end of the spectrum, we have a black-box system. Testers don’t know its internal code structure or internal mechanism. All they know is what the system can accomplish and what inputs it accepts. To an end-user without technical knowledge, the system is a black box.
 

At the other end of the spectrum, we have a white-box system. Testers have full knowledge of its internal code structure. 

 

In the middle we have the gray box. Testers have partial knowledge of the internal code structure. In practice, most systems in real-world applications can be considered a “gray box”. 
 

For example, simply by navigating to any random website and using its features, you are interacting with a black-box system. However, as soon as you right-click on a blank space on the screen and choose Inspect, you’ll get to see the HTML code behind it. The website is now a gray box to you.
 

To the web developers behind the website, it is a white box, since they have full access to the source code.
 

Most systems in real-world applications are gray boxes, which require a combination of both black-box and white-box test case design techniques.
 

Test Case Design Techniques For Black-box Systems

1. Equivalence Class Testing


 

Equivalence Class Testing, also called Equivalence Partitioning, is a black-box testing technique designed to minimize the number of test cases while ensuring adequate coverage.

In this method, input data is divided into equivalence classes or partitions. Each class represents a group of inputs that the system should handle in the same way.

From each equivalence class, one or more representative values are selected for testing. These values are expected to produce the same result as any other value in the same class, reducing the need to test every possible input.

There are typically two types of equivalence classes:

  • Valid Equivalence Class: Inputs that the system should accept and process correctly.
  • Invalid Equivalence Class: Inputs that the system should reject or handle differently.

Let's consider a function that validates user age for an online registration form with a valid range of 18 to 60. We should have the following equivalence classes:

  • Valid: [18-60]
  • Invalid:
    • Ages below 18: [-∞ to 17]
    • Ages above 60: [61 to ∞]
    • Non-numeric inputs: ["abc", "#$%", etc.]

Now we can select representative values from each class:

  • Valid class: 25
  • Invalid classes:
    • Less than 18: 17
    • Greater than 60: 61
    • Non-numeric: “abc”

Resulting test cases:

  • Test Case 1: Age = 25 (Expected: Valid)
  • Test Case 2: Age = 17 (Expected: Invalid)
  • Test Case 3: Age = 61 (Expected: Invalid)
  • Test Case 4: Age = "abc" (Expected: Invalid)
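To make this concrete, here is a minimal sketch in Python; the validate_age function is hypothetical (it is not part of any real codebase in this article) and simply encodes the 18-60 rule so that one representative per class can be checked:

def validate_age(age):
    # Hypothetical validator: accepts ages from 18 to 60 inclusive, rejects everything else
    try:
        return 18 <= int(age) <= 60
    except (TypeError, ValueError):
        return False  # non-numeric input falls into an invalid class

# One representative value per equivalence class
test_cases = [
    (25, True),      # valid class [18-60]
    (17, False),     # invalid class: below 18
    (61, False),     # invalid class: above 60
    ("abc", False),  # invalid class: non-numeric
]

for value, expected in test_cases:
    assert validate_age(value) == expected, f"Failed for input {value!r}"
print("All equivalence class tests passed")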

There are 3 criteria for deciding whether a group of tests makes a good equivalence class:

  1. They all test the same thing.
  2. If one test catches a bug, it is likely that other tests in the same group also do.
  3. If one test does not catch a bug, it is likely that other tests in the same group also do not.

What does that mean? It means that each equivalence class only needs one test case to discover all of the necessary bugs. You can create more test cases if needed, but they usually don’t find more bugs. At its core, equivalence class testing is meant to reduce the number of test cases to a more manageable level while still achieving an acceptable level of test coverage.
 

Here are some examples of inputs where this approach works well:

  • Numeric input ranges (e.g., age, weight)
  • Date ranges (e.g., date of birth, expiration dates)
  • String length validation (e.g., usernames, passwords)
  • Enumerated types (e.g., gender, country codes)
  • Monetary values (e.g., transaction amounts, loan amounts)
  • File uploads (e.g., file size, file type)
  • Inventory counts (e.g., stock quantities, order quantities)
  • Interest rates (e.g., loan rates, savings rates)
  • User permissions (e.g., access levels, subscription tiers)
  • Survey responses (e.g., rating scales, multiple-choice answers)

 

2. Boundary Value Analysis

 

Boundary value analysis is an extension of equivalence partitioning testing, with a focus on the boundaries between equivalence classes. The core idea is that errors are more likely to occur at the edges of input ranges rather than in the middle, making boundary testing crucial. 

Let’s revisit the earlier example. 

Suppose you're testing a function that validates a user’s age for an online registration form, where the valid age range is 18 to 60. We can define the following equivalence classes:

  • Valid Equivalence Class: [18-60]
  • Invalid Equivalence Classes:
    • Ages less than 18: [-∞ to 17]
    • Ages greater than 60: [61 to ∞]
    • Non-numeric inputs: ["abc", "#$%", etc.]

Next, we'll identify the boundaries for numeric values:

  • Just below the lower boundary: 17
  • At the lower boundary: 18
  • Just above the lower boundary: 19
  • Just below the upper boundary: 59
  • At the upper boundary: 60
  • Just above the upper boundary: 61
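These boundary values translate directly into test cases. The sketch below reuses the hypothetical validate_age function from the equivalence class example; as before, it is an illustrative assumption rather than production code:

def validate_age(age):
    # Same hypothetical validator as before: valid range is 18 to 60 inclusive
    try:
        return 18 <= int(age) <= 60
    except (TypeError, ValueError):
        return False

boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # at the lower boundary
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # at the upper boundary
    (61, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert validate_age(value) == expected, f"Boundary check failed for {value}"
print("All boundary value tests passed")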

 

3. Decision Table Testing

Decision Table Testing is a black-box testing technique that uses a decision table to map various input combinations to their corresponding outputs.

Key concepts of a decision table include:

  • Condition: Input variables that influence system behavior
  • Action: Outcomes based on combinations of conditions
  • Rule: A specific combination of conditions and their resulting actions

 

A decision table generally has the following structure:

                 Rule-1    Rule-2    ...    Rule-p
Conditions
  Condition-1
  Condition-2
  ...
  Condition-m
Actions
  Action-1
  Action-2
  ...
  Action-n

 

 

 

 

Here’s an example decision table for a simple loan approval system. The system evaluates loans based on two conditions:

  • The applicant's credit score
  • The applicant's income

Based on these conditions, the system determines whether to approve the loan and the applicable interest rate:

Rule             Rule-1    Rule-2    Rule-3    Rule-4    Rule-5    Rule-6
Conditions
  Credit Score   High      High      Medium    Medium    Low       Low
  Income         High      Low       High      Low       High      Low
Actions
  Loan Approval  Yes       Yes       Yes       No        No        No
  Interest Rate  Low       Medium    Medium    N/A       N/A       N/A

In this table, there are two conditions (credit score, with three possible values, and income, with two) and two actions, resulting in 6 rules. For instance, if an applicant has a High Credit Score and High Income, their loan is approved with a Low Interest Rate. Conversely, an applicant with a Low Credit Score and High Income will have their loan denied, and therefore the Interest Rate is N/A.

In testing, each rule column corresponds to a test case, so there are 6 test cases to execute. 
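As a sketch, the six rule columns can be turned into data-driven test cases. The evaluate_loan function below and its (approval, interest rate) return format are illustrative assumptions that simply implement the table, not part of any real system described here:

def evaluate_loan(credit_score, income):
    # Hypothetical implementation of the decision table rules
    if credit_score == "High":
        return ("Yes", "Low" if income == "High" else "Medium")
    if credit_score == "Medium" and income == "High":
        return ("Yes", "Medium")
    return ("No", "N/A")

# One test case per rule column (Rule-1 through Rule-6)
rules = [
    ("High",   "High", ("Yes", "Low")),
    ("High",   "Low",  ("Yes", "Medium")),
    ("Medium", "High", ("Yes", "Medium")),
    ("Medium", "Low",  ("No",  "N/A")),
    ("Low",    "High", ("No",  "N/A")),
    ("Low",    "Low",  ("No",  "N/A")),
]

for credit, income, expected in rules:
    assert evaluate_loan(credit, income) == expected
print("All decision table rules verified")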

A decision table is ideal for black-box testing because it consolidates all requirements into one clear format. You can also integrate equivalence class testing; for example, if a condition involves a range (like ages 18-60), you might test values at both the lower and upper limits. 

Decision table testing is particularly useful for systems implementing complex business rules that can be represented as combinations of conditions.

 

4. Pairwise Testing

 

Pairwise testing is a black-box testing technique that focuses on testing combinations of two inputs at a time. Instead of testing every possible combination, this method helps reduce the number of tests while still providing good coverage.

Let's look at a brief case study to illustrate the effectiveness of pairwise testing: a software company is developing a new e-commerce platform. The platform needs to be tested across different browsers, operating systems, payment methods, and user types, such as:

  • Browser: Chrome, Firefox, Safari, Edge
  • Operating System: Windows, macOS, Linux
  • Payment Method: Credit Card, PayPal, Bank Transfer
  • User Type: New User, Returning User, Guest

Testing all possible combinations would require:

4 (Browsers) x 3 (Operating Systems) x 3 (Payment Methods) x 3 (User Types) = 108 test cases

By using a pairwise testing tool (such as PICT or ACTS), you can generate a smaller set of test cases that ensure every pair of parameter values is tested at least once. This typically reduces the number of test cases to around 10-20, depending on the specific parameters and values.
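As an illustration, here is a small sketch using the third-party allpairspy Python package (an assumption; PICT and ACTS follow the same principle) to generate combinations in which every pair of parameter values appears at least once:

from allpairspy import AllPairs  # pip install allpairspy

parameters = [
    ["Chrome", "Firefox", "Safari", "Edge"],     # Browser
    ["Windows", "macOS", "Linux"],               # Operating System
    ["Credit Card", "PayPal", "Bank Transfer"],  # Payment Method
    ["New User", "Returning User", "Guest"],     # User Type
]

# Each generated row covers at least one previously uncovered pair of values
for i, combo in enumerate(AllPairs(parameters), start=1):
    print(i, combo)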

 

Test Case Design Techniques For White-box Systems

1. Control Flow Testing

Control flow testing is a white-box testing technique that emphasizes creating and executing test cases to cover predetermined execution paths within the program's code.

While control flow testing provides a high level of thoroughness, it also presents some challenges:

  • Path Explosion: The number of possible paths can become immense. Modern applications are typically so complex that testing every possible control flow path exhaustively becomes practically impossible.
  • Implemented Paths Only: White-box testing only works with paths that have been implemented. If a particular path is missing, it won’t be detected. For instance, in the following code, there is no path for scores below 60:

    def calculate_grade(score):
        if score >= 90:
            grade = 'A'
        elif score >= 80:
            grade = 'B'
        elif score >= 70:
            grade = 'C'
        elif score >= 60:
            grade = 'D'
        # Missing implementation path for scores below 60
        return grade

 
  • Logical Errors Despite Correct Flow: Even when the flow is correct, there can still be logic errors. In this example, the flow is correct, but the inventory should decrease (-1) rather than increase (+1) when an order is dispatched:

    def calculate_inventory(order_dispatched, current_inventory):
        if order_dispatched:
            return current_inventory + 1  # Logic error: should be current_inventory - 1

 

To perform control flow testing, it’s important to understand the Control Flow Graph (CFG). A CFG visually represents all possible paths a program might take during execution and is widely used in control flow testing to analyze and understand the program’s structure.

A control flow graph consists of three main components:

  1. Node: Represents individual statements or blocks of code. For example, the following snippet has three nodes:

    int a = 0;   // Node 1
    if (b > 0) { // Node 2
        a = b;   // Node 3
    }
  2. Edge: Represents the flow of control between nodes, indicating the program's execution path. There are two types of edges:
    • Unconditional edge: Direct flow from one statement to another.
    • Conditional edge: A branch based on conditions (e.g., True/False results from an if-statement). In the snippet above, there’s a conditional edge from the if (b > 0) statement to a = b.
  3. Entry/Exit Points: The start and end points of a program in the CFG.

Let's take a look at this example:

void exampleFunction(int x, int y) {
    if (x > 0) {
        if (y > 0) {
            printf("Both x and y are positive.\n");
        } else {
            printf("x is positive, y is non-positive.\n");
        }
    } else {
        printf("x is non-positive.\n");
    }
}

 

Here we have:

7 nodes: 1 entry point, 1 exit point, 2 decision points, and 3 statement nodes.

  • 1: Entry point of exampleFunction.
  • 2: if (x > 0).
  • 3: if (y > 0).
  • 4: printf("Both x and y are positive.").
  • 5: printf("x is positive, y is non-positive.").
  • 6: printf("x is non-positive.").
  • 7: Exit point of exampleFunction.

Edges:

  • (1 -> 2)
  • (2 -> 3) if x > 0
  • (2 -> 6) if x <= 0
  • (3 -> 4) if y > 0
  • (3 -> 5) if y <= 0
  • (4 -> 7)
  • (5 -> 7)
  • (6 -> 7)

In this CFG, there are three possible execution paths to cover all outcomes.
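To make the paths explicit, here is a small sketch (the adjacency-list representation and the all_paths helper are illustrative, not a standard API) that models the CFG in Python and enumerates every path from entry to exit:

# Adjacency list for the CFG of exampleFunction, using the node numbers above
cfg = {
    1: [2],
    2: [3, 6],  # x > 0 leads to node 3, x <= 0 leads to node 6
    3: [4, 5],  # y > 0 leads to node 4, y <= 0 leads to node 5
    4: [7],
    5: [7],
    6: [7],
    7: [],      # exit node
}

def all_paths(graph, node, target, path=()):
    # Depth-first enumeration of all paths from node to target
    path = path + (node,)
    if node == target:
        return [path]
    return [p for nxt in graph[node] for p in all_paths(graph, nxt, target, path)]

for p in all_paths(cfg, 1, 7):
    print(" -> ".join(map(str, p)))
# Output: 1 -> 2 -> 3 -> 4 -> 7, 1 -> 2 -> 3 -> 5 -> 7, 1 -> 2 -> 6 -> 7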

 

Control Flow Graph
 

2. Structured Testing (Basic Path Testing)

Structured testing, also known as basic path testing, is a white-box testing technique that focuses on identifying and testing all independent paths within the software. The aim is to ensure that every possible execution path in a program is tested at least once.

A typical structured testing process follows these steps:

  1. Derive the control flow graph from the software module.
  2. Calculate the graph's Cyclomatic Complexity (C).
  3. Select a set of C basis paths.
  4. Create a test case for each basis path.
  5. Execute the test cases.

Cyclomatic Complexity is a software metric used to measure the complexity of a program's control flow. It was introduced by Thomas J. McCabe, Sr. in 1976 and serves as a key indicator of the number of linearly independent paths through a program's source code. The formula is:

 
C = Edges - Nodes + 2
 
 
Let's apply this formula to the control flow graph of exampleFunction shown earlier.

There are 8 edges and 7 nodes in total, so the Cyclomatic Complexity is:

 
C = 8 - 7 + 2 = 3

This means there are 3 linearly independent paths through the program.
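For completeness, here is a minimal sketch that computes the metric from the same adjacency-list representation used earlier (the representation itself is an illustrative assumption, and the formula applies to a connected graph):

cfg = {
    1: [2],
    2: [3, 6],
    3: [4, 5],
    4: [7],
    5: [7],
    6: [7],
    7: [],
}

edges = sum(len(successors) for successors in cfg.values())  # 8
nodes = len(cfg)                                             # 7
print(edges - nodes + 2)                                     # C = 8 - 7 + 2 = 3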

Here's how you interpret the C metric:

  • C = 1-10: Simple program, low risk, easy to test and maintain.
  • C = 11-20: Moderate complexity, requiring more detailed testing and review.
  • C = 21-50: High complexity, increased risk of errors, requiring extensive testing and documentation.
  • C > 50: Very high complexity, difficult to test and maintain, likely in need of refactoring.

Based on the identified paths, we can create the following test cases:

Test Case 1:

  • Inputs: x = 5, y = 2
  • Expected Output: Both x and y are positive.

Test Case 2:

  • Inputs: x = 5, y = -2
  • Expected Output: x is positive, y is non-positive.

Test Case 3:

  • Inputs: x = -2 (y can be any value, since it is not evaluated on this path)
  • Expected Output: x is non-positive.
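To show how these basis-path test cases could be automated, here is a sketch with a Python port of the C exampleFunction above; the port and the expected output strings are assumptions made for illustration:

def example_function(x, y):
    # Python port of exampleFunction, returning the message instead of printing it
    if x > 0:
        if y > 0:
            return "Both x and y are positive."
        return "x is positive, y is non-positive."
    return "x is non-positive."

# One test case per basis path (cyclomatic complexity C = 3)
basis_path_cases = [
    ((5, 2),  "Both x and y are positive."),
    ((5, -2), "x is positive, y is non-positive."),
    ((-2, 0), "x is non-positive."),
]

for (x, y), expected in basis_path_cases:
    assert example_function(x, y) == expected
print("All basis paths covered")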

 

Conclusion

Test design techniques are the backbone of effective software testing. They help us go beyond just "checking the boxes" and ensure we’re really digging into the areas where bugs might hide. Whether it’s equivalence partitioning, decision tables, or path-based white-box testing, these approaches give us the structure to create meaningful test cases that truly reflect how the software works.
