What is Integration Testing? Definition, How-to, Examples
All of your unit tests passed with green results! Congrats. That means all of your software modules are working well (individually).
But what can possibly go wrong when those smoothly running modules are integrated? Quite a lot: components that work well in isolation can break the moment they are combined. That's exactly when integration testing comes to the rescue.
In this article, we'll explore the concept of integration testing in-depth, top integration testing tools, best practices, and challenges you should be wary of.
What is Integration Testing?
Integration testing is a type of testing in which software components are combined, either incrementally or all at once, and tested as a group.
The idea of integration testing is to take already-validated parts (by unit testing) and combine them to ensure that the interaction between them works as expected. It examines interfaces, communication protocols, and data flows between the different modules, ensuring the system does not fall apart when the parts are combined.
It can be a bit like fine-tuning an orchestra. Each instrument (or code part) needs to play its notes correctly, but even more importantly, they need to harmonize with the others. The goal is to avoid that terrible moment where the trumpet blasts off-key or the drums come in late.
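To make the idea concrete, here is a minimal sketch in Python of what an integration test can look like. The two "modules" (a pricing helper and a Cart class) are hypothetical and defined inline so the example is self-contained; the point is that both real implementations are exercised together, and the assertion checks the data that flows across their boundary.

```python
# A minimal integration-test sketch. In a real project the two modules would
# live in separate files and be imported; they are inlined here for brevity.

# --- module 1: pricing (low-level helper) ------------------------------------
def apply_discount(total: float, percent: float) -> float:
    """Return the total after applying a percentage discount."""
    return round(total * (1 - percent / 100), 2)

# --- module 2: cart (calls into the pricing module) ---------------------------
class Cart:
    def __init__(self):
        self.items = []  # list of (name, unit_price, quantity)

    def add(self, name: str, unit_price: float, quantity: int = 1):
        self.items.append((name, unit_price, quantity))

    def checkout_total(self, discount_percent: float = 0.0) -> float:
        subtotal = sum(price * qty for _, price, qty in self.items)
        # Integration point: Cart hands its subtotal to the pricing module.
        return apply_discount(subtotal, discount_percent)

# --- integration test: both real modules, no mocks ----------------------------
def test_cart_and_pricing_work_together():
    cart = Cart()
    cart.add("keyboard", 50.00, 2)
    cart.add("mouse", 25.00, 1)
    # A 125.00 subtotal with a 10% discount should come out to 112.50
    assert cart.checkout_total(discount_percent=10) == 112.50
```

A unit test would typically mock `apply_discount` to isolate `Cart`; the integration test deliberately uses the real function so that a mismatch at the boundary, such as a wrong argument order or unit, surfaces here.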
Why is Integration Testing Important?
Software is often built from many individual software components or modules. Issues between those modules can always happen for many reasons:
- Inconsistent code logic: Modules are written by different programmers whose logic and development approaches differ, so functional or usability issues can surface once those modules are integrated. Integration testing ensures that the code behind these components is aligned, resulting in a working application.
- Shifting requirements: Clients change their requirements frequently. Modifying the code of 1 module to adapt to new requirements sometimes means changing its code logic entirely, which affects the entire application. These changes are not always caught by unit testing, hence the need for integration testing to uncover the defects that slip through.
- Erroneous data: Data can change as it is transferred across modules. If it is not properly formatted during the transfer, the receiving module cannot read and process it, resulting in bugs. Integration testing is required to pinpoint where the issue lies for troubleshooting.
- Third-party services and API integrations: Since data can change in transit, APIs and third-party services may receive invalid input and return incorrect responses. Integration testing ensures that these integrations communicate with each other correctly.
- Inadequate exception handling: Developers usually account for exceptions in their code, but sometimes they can’t fully see all of the exception scenarios until the modules are pieced together. Integration testing allows them to recognize those missing exception scenarios and make revisions.
- External Hardware Interfaces: Bugs can also arise when there is software-hardware incompatibility, which can easily be found with proper integration testing.
All of these scenarios can be prevented with good integration testing processes in place.
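As a hedged illustration of the "erroneous data" scenario above, the sketch below uses two hypothetical Python modules, an exporter and an importer, that must agree on a date format. The integration test feeds the real exporter's output into the real importer, so a format change on either side fails the test, which is exactly the kind of cross-module defect that unit tests with mocked data tend to miss.

```python
from datetime import date, datetime

# --- module A: report exporter (hypothetical) ---------------------------------
def export_order(order_id: int, placed_on: date) -> dict:
    """Serialize an order for the reporting module."""
    return {"id": order_id, "placed_on": placed_on.strftime("%Y-%m-%d")}

# --- module B: report importer (hypothetical) ---------------------------------
def import_order(payload: dict) -> tuple[int, date]:
    """Parse a serialized order back into native types."""
    # If module A ever changes its date format (e.g. to "%d/%m/%Y"),
    # this call raises ValueError -- a defect only visible when the
    # two real modules are exercised together.
    placed_on = datetime.strptime(payload["placed_on"], "%Y-%m-%d").date()
    return payload["id"], placed_on

# --- integration test: real exporter feeding the real importer ----------------
def test_export_then_import_round_trip():
    original = (42, date(2024, 3, 1))
    payload = export_order(*original)
    assert import_order(payload) == original
```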
Integration Testing vs End-to-end Testing
Simply put, integration testing verifies interactions between individual components or modules, whereas end-to-end testing verifies the flow of an entire user journey. The scope of end-to-end testing is bigger than that of integration testing.
| Aspect | End-to-End Testing | Integration Testing |
| --- | --- | --- |
| Purpose | Validates system behavior in real-world scenarios | Validates interactions between components |
| Scope | Broader; covers the entire technology stack of the application | Narrower; covers interactions between specific components/modules |
| Cost | More expensive, as it often requires more resources (personnel, equipment, testing environments) | Less expensive than end-to-end testing |
| Testing stage | Performed at the end of the software development lifecycle, before releases | Performed after unit testing and before end-to-end testing |
| Technique | Black-box testing, often combined with User Acceptance Testing (UAT) | White-box testing, often performed via API testing |
Read More: End-to-end testing vs Integration Testing
Unit Testing vs Integration Testing: Key Differences
In simple terms, unit testing focuses on testing individual units in isolation, while integration testing puts those units together and tests them as a whole. There are also several differences between them in other aspects, such as purpose, dependencies, and execution environment. Have a look at the table below for a side-by-side comparison of the two testing types:
| Aspect | Unit Testing | Integration Testing |
| --- | --- | --- |
| Scope | Focuses on testing individual units of code (functions or methods) | Focuses on testing interactions between multiple units or modules |
| Purpose | Verifies that each unit of code works as intended in isolation | Verifies that different units or modules work together correctly |
| Dependencies | Mocks or stubs external dependencies to isolate the unit being tested | Uses real dependencies, as it tests the integration between units |
| Granularity | Tests specific functionality within a small code unit | Tests how different units collaborate with each other |
| Execution Environment | Can be executed in a development environment or Continuous Integration (CI) system | Requires a more elaborate environment to host multiple units working together |
| Test Data | Typically uses small datasets or mocked data to isolate the unit | May use larger datasets that mimic real-world scenarios for interactions |
| Test Speed | Generally faster, as it focuses on smaller pieces of code | Can be slower due to the need to set up and tear down multiple units |
| Bug Identification | Helps identify and isolate bugs within individual code units | Helps identify bugs in how units interact with each other |
| Test Coverage | Provides higher coverage for individual units or functions | Provides higher coverage for the interactions between units |
| Test Maintenance | Requires updates only when the unit's behavior changes | May need updates when the interaction between units changes |
| Deployment | Can be run as part of a developer's workflow | Typically conducted after unit tests and before end-to-end tests as part of integration pipelines |
| Importance in CI/CD | Essential for Continuous Integration (CI) to ensure small units work as expected | Critical for Continuous Deployment (CD) to ensure all parts work together |
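The "Dependencies" row is the easiest one to see in code. Below is a minimal, hypothetical Python sketch (names like `fetch_rate` and `convert` are invented for illustration) showing the same function covered by a unit test that mocks its dependency and by an integration test that uses the real one.

```python
from unittest import mock

# Hypothetical module under test: converts an amount using an exchange-rate
# lookup that, in production, might call an external service.
def fetch_rate(currency: str) -> float:
    # Imagine a real HTTP call here; kept trivial so the sketch runs offline.
    rates = {"EUR": 0.9, "GBP": 0.8}
    return rates[currency]

def convert(amount: float, currency: str) -> float:
    return round(amount * fetch_rate(currency), 2)

# Unit test: the dependency is mocked, so only convert()'s own logic is checked.
def test_convert_unit():
    with mock.patch(f"{__name__}.fetch_rate", return_value=0.5):
        assert convert(10, "EUR") == 5.0

# Integration test: the real fetch_rate() is used, so the interaction
# (argument passed, value returned, rounding) is exercised for real.
def test_convert_integration():
    assert convert(10, "EUR") == 9.0
```

Run with pytest, the unit test passes even if `fetch_rate` is broken, while the integration test only passes when the real interaction behaves as expected.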
Read More: Unit Testing vs Integration Testing: In-depth Comparison
Types Of Integration Testing
There are several strategies to perform integration testing, each of which has its own advantages and disadvantages, with the 2 most common approaches being:
- Big Bang Approach
- Incremental Approach
1. Big Bang Integration Testing
Big Bang Integration testing is an integration testing approach in which all modules are integrated and tested at once, as a singular entity. It is essentially “testing in a Big Bang fashion”.
The Big Bang integration testing process is not carried out until all components have been successfully unit tested.
Advantages:
- Suitable for simple and small-sized systems with low level of dependency among software components
- Little to no planning beforehand required
- Easy to set up since all modules are integrated simultaneously
- Management and coordination efforts are minimized since there is only one major testing phase
Disadvantages:
- Costly and time-consuming for large systems with a huge number of units, since testers have to wait until all modules have been developed before testing can start
- Waiting for all modules to be developed before testing also means late defect detection
- Hard to isolate and pinpoint bugs in specific modules
- Hard to debug due to the complexity of multiple integrated modules
Best Practices when using Big Bang testing:
- Clearly define the interactions between each unit/function before testing to minimize missing defects
- Perform extensive logging for more accurate fault localization
- Perform Big Bang testing for simple applications
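To make the approach and the logging best practice above concrete, here is a hedged Python sketch with three hypothetical, already-unit-tested modules wired together in a single test. The log lines around each hand-off are what make fault localization bearable when something in the combined flow breaks.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bigbang")

# Hypothetical modules, all assumed to have passed unit testing already.
def reserve_stock(item: str, qty: int) -> bool:
    return qty <= 5  # pretend warehouse availability check

def create_order(item: str, qty: int) -> dict:
    return {"item": item, "qty": qty, "status": "created"}

def notify(order: dict) -> str:
    return f"Order for {order['qty']} x {order['item']} is {order['status']}"

# Big Bang: every module is wired together in one test. Logging each hand-off
# mitigates the approach's main weakness: hard-to-localize failures.
def test_order_flow_big_bang():
    log.info("reserving stock")
    assert reserve_stock("mug", 2)

    log.info("creating order")
    order = create_order("mug", 2)
    assert order["status"] == "created"

    log.info("sending notification")
    assert "2 x mug" in notify(order)
```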
2. Incremental Integration Testing
Incremental integration testing is an approach in which 2 or more modules with closely related logic and functionality are grouped and tested first; testers then gradually move on to other groups of modules instead of testing everything at once. The process ends when all modules have been integrated and tested.
Incremental integration testing is more strategic than Big Bang testing. It requires substantial planning beforehand.
Advantages:
- Earlier defect detection compared to Big Bang testing since the modules are integrated and tested as soon as they are developed. QA teams don't have to wait until all modules are available to begin testing.
- Easier fault localization since the modules are tested in relatively small groups.
- The strategic nature of incremental integration testing can be leveraged in project management. For example, QA managers can choose which module to test first based on urgency, priority, or resource availability.
- The risk of encountering catastrophic failures is also significantly reduced since issues are addressed early on from the root.
Disadvantages:
- In earlier stages of the project, certain system functionalities may not yet be available, leading to a dependence on stubs and drivers (which are essentially mock components that will be used as substitutes for actual components).
- The total number of tests to perform can be huge depending on the scale of the project, requiring significant organizational resources
- Coordinating a large integration testing project with this approach may be complex
- Requires the system's design and logic to be completely defined before it can be broken down into small units
- The lack of certain system functionalities in earlier stages, if not carefully documented, may even lead to system "blind spots" later down the road
Approaches of Incremental Integration Testing
Incremental integration testing can be further divided into 3 smaller approaches, each of which comes with its own advantages and disadvantages that QA teams need to weigh carefully for their projects. These approaches are named after the level of impact the software components being integrated have on the overall system:
- Bottom-up approach: perform testing for low-level components first, then gradually move to higher-level components.
- Top-down approach: perform testing for high-level components first, then gradually move to lower-level components.
- Hybrid approach: combining the two former approaches
To better understand these 3 concepts, we must first define low-level components and high-level components.
The concept of “low” vs “high” here refers to the position of the software component in the system hierarchy when performing integration testing.
Low-level components are the very basic building blocks of the software, performing the most fundamental tasks in the system. They are basic data structures or simple functions that perform a specific, low-impact task.
High-level components are the most comprehensive and complex components of the system, representing complete system behavior. They tend to perform more abstract and far-reaching tasks such as data processing, data management, or tasks involving complex encryption and security mechanisms. If these high-level modules are buggy, the entire system is put at risk.
Have a look at the table below to better understand the differences between the two types of components, along with examples from an eCommerce website:
| Aspect | Low-Level Modules | High-Level Modules |
| --- | --- | --- |
| Complexity | Simple functionalities | Complex, multi-functional |
| Scope | Focused on specific tasks | Comprehensive functionalities |
| Granularity | Smaller and modular | Larger and more integrated |
| Examples (eCommerce) | A price-calculation function, an input-validation utility, a single database query | The checkout flow, order management, payment processing |
1. Bottom-up Approach For Incremental Integration Testing
With the bottom-up approach, testers start with individual modules at the lowest level, then gradually move to higher-level modules, hence the term “bottom-up”. The rationale for this approach is that the entire system can only work reliably if these basic building blocks work reliably.
Let's use an analogy to understand the concept better. A “bottom-up” approach is essentially going from the more specific and granular components to more general and comprehensive components.
Imagine software components as types of clothing.
The lower-level components are shirts and polos (specific types of clothing). Shirts and polos fall under the “Tops” category, which falls under the “Men's clothing” category. At the highest and most generic level, they are simply called “clothing”.
When using the bottom-up approach for integration testing, we go from the specific modules to more comprehensive modules, or from the “shirt” module to the “clothing” module.
We should use bottom-up integration testing when:
- Complexity is primarily found in lower-level modules
- The team follows incremental development (which involves developing lower-level components first before moving to higher-level modules)
- Defect localization is a crucial aspect of the project since the granularity of the bottom-up approach provides more exact bug isolation
- Higher-level modules are still under development or likely to change frequently
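As a hedged sketch of the bottom-up order, suppose a low-level tax module is ready while the higher-level checkout module that will call it is not. The test then acts as a simple driver, a stand-in caller (drivers are discussed further below). All names here are hypothetical.

```python
# Bottom-up sketch: the low-level tax module is real; the higher-level
# checkout module that will eventually call it does not exist yet, so a
# small "driver" mimics the calls it is expected to make.

# --- low-level module (already developed) -------------------------------------
def calculate_tax(subtotal: float, rate: float = 0.08) -> float:
    return round(subtotal * rate, 2)

# --- driver: stand-in for the future checkout module's calls ------------------
def checkout_driver(subtotals):
    return [calculate_tax(s) for s in subtotals]

def test_tax_module_via_driver():
    assert checkout_driver([100.0, 49.99]) == [8.0, 4.0]
```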
2. Top-down Approach For Incremental Integration Testing
With the top-down approach, testers start with the highest-level modules, then gradually move to lower-level modules, hence the term “top-down”.
For example, consider the set of modules and stubs for a ride-hailing application:
- Module A: User Authentication
- Module B: Ride Booking
- Stub DI: Driver Information
- Module P: Payment Processing
- Module PC: Payment by Cash
- Stub DP: Debit Card/Credit Card Payment (yet to be developed)
- Stub EP: E-Payment (yet to be developed)
- Module T: Ride Tracking
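A hedged sketch of how this top-down flow might be exercised in Python: the high-level Ride Booking module is real, while the yet-to-be-developed Driver Information and E-Payment modules are replaced by stubs that return canned answers. Only the module names come from the list above; the implementations are invented for illustration.

```python
# Top-down sketch: Module B (Ride Booking) is real; Stub DI and Stub EP stand
# in for modules that are not developed yet.

# --- Stub DI: Driver Information ----------------------------------------------
def find_nearest_driver_stub(pickup: str) -> dict:
    return {"driver_id": "D-001", "eta_minutes": 4}  # canned response

# --- Stub EP: E-Payment ---------------------------------------------------------
def charge_stub(rider_id: str, amount: float) -> bool:
    return True  # always "succeeds" -- enough to exercise the booking flow

# --- Module B: Ride Booking (the high-level module under test) ------------------
def book_ride(rider_id: str, pickup: str, fare: float,
              find_driver=find_nearest_driver_stub, charge=charge_stub) -> dict:
    driver = find_driver(pickup)
    paid = charge(rider_id, fare)
    return {"driver": driver["driver_id"], "eta": driver["eta_minutes"], "paid": paid}

def test_ride_booking_with_stubs():
    booking = book_ride("R-42", "Main St", fare=12.5)
    assert booking == {"driver": "D-001", "eta": 4, "paid": True}
```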
These 2 approaches inherit all of the advantages and disadvantages of incremental integration testing. The major differences between them lie in the specific situations and scenarios in which they are used.
The final decision to choose which type also depends on the project's characteristics, development approach, and testing requirements.
Similarly, we should use top-down integration testing when:
- The critical functionalities are primarily concentrated in the higher-level components
- It's crucial to simulate real-world scenarios and user interactions early, since top-down testing provides a more holistic view of the system's behavior from the start
- Lower-level modules are well-defined, stable, and unlikely to change significantly
- User-facing features need fast validation, since top-down testing enables early prototypes for user feedback
Note that when choosing the top-down approach, we may need to build stubs to substitute for low-level modules that are not yet developed, while the bottom-up approach may require drivers to stand in for unavailable high-level modules.
Stubs and drivers are essentially simplified versions of actual software components that let the testing team proceed without waiting for the real dependencies: a stub stands in for a module that the code under test calls, while a driver stands in for a module that calls the code under test.
3. Hybrid Approach For Incremental Integration Testing
Sandwich Testing (also known as Hybrid Integration Testing) is an approach in which testers employ both top-down and bottom-up testing simultaneously.
Advantages:
- QA teams can tailor their integration testing activities based on project requirements, combining the strengths of different methods.
- More flexibility in terms of using resources
- Both low-level and high-level software components are verified at the same time
Disadvantages:
- Complex and requires careful planning and coordination to decide which modules to test using each method
- Effective communication among team members is crucial to ensure consistency and proper tracking of issues
- Teams may find it challenging to switch between different integration strategies
Integration Testing Best Practices
To build a holistic integration test strategy covering all aspects of the application, it is crucial for testers to align with developers and clients on their vision and to optimize the strategy along the way. Whatever the approach, it is essential to follow these integration testing best practices:
- Ensure that all modules have been unit tested before moving to the integration phase so that the integration test results are reliable
- Develop a comprehensive test plan and strategy that outlines the scope, objectives, test cases, and resources required for integration testing.
- Automate repetitive and complex test cases to improve testing efficiency and maintain test consistency across different test runs; one common way to organize automated unit and integration suites is sketched after this list.
- Validate the input test data for higher test reliability
- Perform regression testing after each integration to ensure that new changes do not impact previously integrated components adversely.
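As a hedged example of putting the automation and ordering advice above into practice, the sketch below uses pytest markers to keep fast unit tests and slower integration tests in the same codebase while letting a CI pipeline run them in sequence. The marker name `integration` is a convention chosen here, not something built into pytest, and the file name is hypothetical.

```python
# test_totals.py -- a hypothetical test module
import pytest

def add(a: int, b: int) -> int:
    return a + b

def test_add_unit():
    # Fast, isolated unit test: runs on every commit.
    assert add(2, 3) == 5

@pytest.mark.integration
def test_add_feeds_report():
    # Integration-style test: selected separately in the pipeline.
    report = {"total": add(2, 3)}
    assert report["total"] == 5
```

The pipeline can then run `pytest -m "not integration"` first and `pytest -m integration` afterwards; registering the marker under `markers =` in `pytest.ini` keeps pytest from warning about an unknown mark.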
Key Takeaway
Integration testing is an essential part of software testing as it validates if the modules can communicate well with each other, which is something that can’t be tested with unit testing. We can take either the Big Bang approach or the Incremental approach to integration testing, both with their own advantages and disadvantages.
Performing integration testing typically involves API and UI integration testing. However, professionals often cannot perform those tests in one place and have to constantly switch from one tool to another when they need different testing types.
Katalon for Integration Testing
The Katalon Platform is a powerful All-in-One platform that helps QA teams plan and design test cases in one place without any additional tools. Katalon also supports a wide range of applications and testing types, including API testing and UI testing, to cover all of your needs.
1. Increased Test Coverage
A major pain point for testers performing UI testing is the huge number of devices and browsers to test the application on. The UI of an application may look perfectly fine on one device but messy on another due to differences in screen size, resolution, or other technical specifications. Katalon allows you to stay within budget thanks to its cross-platform, cross-browser testing capabilities in cloud environments.
2. Reduce Test Maintenance Effort
The Katalon Platform is also built for test maintenance. When an application is updated, testers need to review whether their test cases must be adapted to the code changes. This is a daunting task that the Katalon Platform can help with thanks to its Page Object Model design approach.
Katalon stores the locators used across tests in an object repository, so when changes occur, the locators and related artifacts are already grouped together, making updates easy.