What is Integration Testing? Definition, How-to, Examples
Integration testing is a type of software testing where components of the software are gradually integrated and then tested as a unified group. Usually these components are already working well individually, but they may break when integrated with other components. With integration testing, testers want to find defects that surface due to code conflicts between software modules when they are integrated with each other.
Conflicts between software modules happen for many reasons, such as incompatibility between subsystem versions, data format conflicts, or differences in processing logic. Integration testing pinpoints those communication issues between software components. It usually occurs after unit testing and before system testing.
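To make the idea concrete, here is a minimal sketch of an integration test in Python. The module and function names (`apply_discount`, `cart_total`) are invented for this illustration: each function could pass its own unit tests, while the integration test checks that the two cooperate correctly through a shared data format.

```python
# Hypothetical modules for illustration: a pricing module and a cart module.
# Each works in isolation; the integration test checks they agree on data format.

def apply_discount(price_cents: int, percent: int) -> int:
    """Pricing module: returns the discounted price in cents."""
    return price_cents * (100 - percent) // 100

def cart_total(items: list[dict]) -> int:
    """Cart module: sums discounted prices; expects 'price' and 'discount' keys."""
    return sum(apply_discount(item["price"], item["discount"]) for item in items)

# Integration test: verify the two modules cooperate end to end.
def test_cart_uses_pricing_correctly():
    items = [
        {"price": 1000, "discount": 10},  # 900 cents after discount
        {"price": 500, "discount": 0},    # 500 cents unchanged
    ]
    assert cart_total(items) == 1400

test_cart_uses_pricing_correctly()
```

A defect here would surface only when the two modules meet, for instance if the cart passed prices in dollars while the pricing module expected cents.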
Why is Integration Testing Important?
Software is often built from many individual software components or modules. These modules may pass unit testing and work perfectly fine individually, yet collapse when put together, for various reasons:
- Inconsistent code logic: They are coded by different programmers whose logic and approach to development differ from each other, so when integrated, the modules cause functional or usability issues. Integration testing ensures that the code behind these components is aligned, resulting in a working application.
- Shifting requirements: Clients change their requirements frequently. Modifying the code of 1 module to adapt to new requirements sometimes means changing its code logic entirely, which affects the entire application. These changes are not always reflected in unit testing, hence the need for integration testing to uncover the missing defects.
- Erroneous Data: Data can change when transferred across modules. If not properly formatted when transferring, the data can’t be read and processed, resulting in bugs. Integration testing is required to pinpoint where the issue lies for troubleshooting.
- Third-party services and API integrations: Since data can change when transferred, API and third-party services may receive false input and generate false responses. Integration testing ensures that these integrations can communicate well with each other.
- Inadequate exception handling: Developers usually account for exceptions in their code, but sometimes they can’t fully see all of the exception scenarios until the modules are pieced together. Integration testing allows them to recognize those missing exception scenarios and make revisions.
- External Hardware Interfaces: Bugs can also arise when there is software-hardware incompatibility, which can easily be found with proper integration testing.
Integration testing protects you from those downfalls. There are numerous benefits to this practice:
- Software is built from a complex network of subcomponents. Integration testing ensures that they work together in harmony, delivering the intended functionalities.
- Modern software must also connect with a wide variety of external systems via APIs to deliver specific functionalities, and conflicts can happen with those too, which integration testing uncovers.
- It verifies the flow of data and information throughout the entire system.
- It increases confidence in system behavior, especially when performed early in the testing life cycle, following the shift-left testing approach.
Types Of Integration Testing
There are several strategies to perform integration testing, each of which has its own advantages and disadvantages, with the 2 most common approaches being:
- Big Bang Approach
- Incremental Approach
The Incremental Approach can be carried out with 3 different methods:
- Bottom-up approach
- Top-down approach
- Sandwich approach
Big Bang Integration Testing Explained
Big Bang Integration testing is an integration testing approach in which all modules are integrated and tested at once, as a singular entity. The integration process is not carried out until all components have been successfully unit tested.
Advantages:
- Suitable for simple, small-sized systems with a low level of dependency among software components
- Requires little to no planning beforehand
- Easy to set up since all modules are integrated simultaneously
- Management and coordination efforts are minimized since there is only one major testing phase

Disadvantages:
- Costly and time-consuming for large systems with many units, as testers have to wait until all modules have been developed to start testing
- Waiting for all modules to be developed before testing also means late defect detection
- Hard to isolate and pinpoint bugs in specific modules
- Hard to debug due to the complexity of multiple integrated modules
Best Practices when using Big Bang testing:
- Clearly define the interactions between each unit/function before testing to minimize missed defects
- Perform extensive logging for more accurate fault localization
- Perform Big Bang testing for simple applications
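As a hedged sketch of what Big Bang integration looks like in practice (all module names here are invented), every module is assembled and exercised at once through a single top-level entry point, rather than in small groups:

```python
# Big Bang sketch: all modules are integrated at once and exercised
# through one combined flow (names are invented for this example).

def authenticate(user: str, password: str) -> bool:
    return password == "secret"            # stand-in for a real auth module

def load_profile(user: str) -> dict:
    return {"user": user, "tier": "gold"}  # stand-in for a profile module

def checkout(user: str, password: str, amount: int) -> str:
    # Every module participates in one combined test run.
    if not authenticate(user, password):
        return "denied"
    profile = load_profile(user)
    discount = 10 if profile["tier"] == "gold" else 0
    return f"charged {amount * (100 - discount) // 100}"

# One test exercises the fully assembled system.
assert checkout("ann", "secret", 1000) == "charged 900"
assert checkout("ann", "wrong", 1000) == "denied"
```

Note the downside mentioned above: if the second assertion failed, the fault could be in any of the three modules, and extra logging would be needed to localize it.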
Incremental Integration Testing Explained
Incremental integration testing is an approach in which 2 or more modules with closely related logic and functionality are grouped and tested first; testers then gradually move on to other groups of modules instead of testing everything at once. The process ends when all modules have been integrated and tested. Incremental integration testing is more strategic than Big Bang testing and requires substantial planning beforehand.
Advantages:
- Earlier defect detection compared to Big Bang testing, since modules are integrated and tested as soon as they are developed. QA teams don't have to wait until all modules are available to begin testing.
- Easier fault localization since the modules are tested in relatively small groups.
- The strategic nature of incremental integration testing can be leveraged in project management. For example, QA managers can choose which module to test first based on urgency, priority, or resource availability.
- The risk of encountering catastrophic failures is also significantly reduced since issues are addressed early on, from the root.

Disadvantages:
- Requires thorough planning beforehand
- The total number of tests to perform can be huge depending on the scale of the project, requiring significant organizational resources
- Coordinating a large integration testing project with this approach may be complex
- Requires a complete definition of the system's structure and logic before it can be broken down into small units
- In earlier stages of the project, certain system functionalities may not yet be available, leading to a dependence on stubs and drivers (essentially mock components used as substitutes for actual components)
- The lack of system functionalities in earlier stages, if not carefully documented, may even lead to system “blindspots” later down the road
Incremental integration testing can be further divided into 3 smaller approaches, each of which comes with its own advantages and disadvantages that QA teams need to carefully consider for their projects. These approaches are named based on the level of impact that the software components being integrated have on the overall system:
- Bottom-up approach: perform testing for low-level components first, then gradually move to higher-level components.
- Top-down approach: perform testing for high-level components first, then gradually move to lower-level components.
- Hybrid (sandwich) approach: a combination of the two former approaches.
To better understand these 3 concepts, we must first define low-level components and high-level components.
Low-level Component vs High-level Component
The concept of “low” vs “high” here refers to the position of the software component in the system hierarchy when performing integration testing.
When referring to "low-level”, we are talking about the very basic building blocks of the software, performing the most fundamental and basic tasks in the system. They are basic data structures or simple functions that perform a specific and low-impact task in the software.
On the other hand, when referring to “high-level”, we are talking about the most comprehensive and complex components of the system, representing the complete system behavior. They tend to perform more abstract and far-reaching tasks such as data processing, data management, or tasks involving complex encryption and security mechanisms. If these high-level modules are buggy, the entire system is put under risk.
You can have a look at the table below to better understand the differences between the two types of components:

| Low-level components | High-level components |
|---|---|
| Focused on specific tasks | Responsible for broad, system-wide behavior |
| Smaller and modular | Larger and more integrated |
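A small illustrative pair may help here. In this invented sketch for a hypothetical eCommerce site, `format_price` is a low-level building block, while `order_summary` is a higher-level component that composes it into broader behavior:

```python
# Illustrative only: one low-level and one high-level component of a
# hypothetical eCommerce site (names invented for this sketch).

def format_price(cents: int) -> str:
    """Low-level: a small, single-purpose building block."""
    return f"${cents // 100}.{cents % 100:02d}"

def order_summary(items: dict[str, int]) -> str:
    """High-level: composes low-level pieces into broader system behavior."""
    total = sum(items.values())
    lines = [f"{name}: {format_price(price)}" for name, price in items.items()]
    return "\n".join(lines + [f"TOTAL: {format_price(total)}"])

print(order_summary({"mug": 1250, "tote": 800}))
```

A bug in `format_price` affects only price display; a bug in `order_summary` puts the whole checkout flow at risk, which is why high-level components carry more system-wide impact.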
Bottom-up Approach vs. Top-down Approach
With the bottom-up approach, testers start with individual modules at the lowest level, then gradually move to higher-level modules, hence the term “bottom-up”. The rationale for this approach is that the entire system can only work reliably if these basic building blocks work reliably.
Let's use an analogy to understand the concept better. A “bottom-up” approach is essentially going from the more specific and granular components to more general and comprehensive components.
Imagine software components as types of clothing.
The lower-level components are shirts and polos (specific types of clothing). Shirts and polos fall under the “Tops” category, which falls under the “Men's clothing” category. At the highest and most generic level, they are simply called “clothing”.
When using the bottom-up approach for integration testing, we go from the specific modules to more comprehensive modules, or from the “shirt” module to the “clothing” module.
With the top-down approach, testers start with the highest-level modules, then gradually move to lower level modules, hence the term “top-down”.
For example, consider the modules required for a ride-hailing application:
Module A: User Authentication
Module B: Ride Booking
Stub DI: Driver Information
Module P: Payment Processing
Module PC: Pay-by-Cash (PC)
Stub DP: Debit/Credit Card Payment (DP), yet to be developed
Stub EP: E-Payment (EP), yet to be developed
Module T: Ride Tracking
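A hedged sketch of how top-down testing might handle this situation (the functions below are invented, not taken from any real codebase): the high-level Payment Processing module (P) is tested with the finished cash module (PC), while the undeveloped card (DP) and e-payment (EP) modules are replaced by stubs that return canned responses.

```python
# Top-down sketch: Module P (Payment Processing) is real; the undeveloped
# card (DP) and e-payment (EP) modules are replaced by stubs.

def pay_by_cash(amount: int) -> str:        # Module PC: already developed
    return f"cash:{amount}"

def stub_card_payment(amount: int) -> str:  # Stub DP: canned response
    return "card:stubbed"

def stub_e_payment(amount: int) -> str:     # Stub EP: canned response
    return "epay:stubbed"

def process_payment(method: str, amount: int) -> str:
    """Module P: routes a payment to the chosen payment module."""
    handlers = {
        "cash": pay_by_cash,
        "card": stub_card_payment,  # stub until DP is developed
        "epay": stub_e_payment,     # stub until EP is developed
    }
    return handlers[method](amount)

# The high-level routing logic is tested before the low-level modules exist.
assert process_payment("cash", 500) == "cash:500"
assert process_payment("card", 500) == "card:stubbed"
```

When DP and EP are eventually developed, the stubs are swapped out for the real modules and the same tests are re-run.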
These 2 approaches inherit all of the advantages and disadvantages of incremental integration testing. The major difference between them lies in the specific situations and scenarios in which they are used.
The final decision to choose which type also depends on the project's characteristics, development approach, and testing requirements.
We should use bottom-up integration testing when:
- Complexity is primarily found in lower-level modules
- The team follows incremental development (which involves developing lower-level components first before moving to higher-level modules)
- Defect localization is a crucial aspect of the project, since the granularity of the bottom-up approach provides more exact bug isolation
- Higher-level modules are still under development or likely to change frequently
Similarly, we should use top-down integration testing when:
- The critical functionalities are primarily concentrated in the higher-level components
- It's crucial to simulate real-world scenarios and user interactions, since top-down testing provides a more holistic view of the system's behavior early on
- Lower-level modules are well-defined, stable, and unlikely to change significantly
- Faster validation of user-facing features is needed, as top-down testing enables early prototypes for user feedback
Note that when choosing the top-down testing approach, we may need to build a stub to substitute for low-level modules that are not yet developed, while for the bottom-up testing approach, we may need to build a driver to stand in for the unavailable high-level modules that would normally call the components under test.
Stubs and drivers are essentially simplified versions of actual software components, allowing the testing team to proceed without having to wait for the real dependent components.
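The following invented example shows both in one place: a stub replacing a missing low-level tax module (top-down), and a driver acting as a throwaway caller for a low-level module whose real caller does not exist yet (bottom-up). The function names are assumptions made for this sketch.

```python
# Invented example. In top-down testing, a STUB stands in for a missing
# low-level module; in bottom-up testing, a DRIVER stands in for a missing
# high-level caller.

def tax_for(amount: int) -> int:
    """A real low-level module: computes 10% tax."""
    return amount // 10

def stub_tax_for(amount: int) -> int:
    """Stub: canned answer used while the real tax module is unavailable."""
    return 0

def invoice_total(amount: int, tax_fn=tax_for) -> int:
    """A high-level module that depends on the tax module."""
    return amount + tax_fn(amount)

# Top-down: test the high-level module with the stub wired in.
assert invoice_total(100, tax_fn=stub_tax_for) == 100

# Bottom-up: a driver is throwaway code that calls the low-level module
# directly, because the real invoicing caller may not exist yet.
def driver_for_tax_module():
    assert tax_for(200) == 20

driver_for_tax_module()
```

Both are discarded (or replaced by the real components) once the missing modules become available.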
Hybrid Approach (Sandwich Approach)
Sandwich Testing (also known as Hybrid Integration Testing) is an approach in which testers employ both top-down and bottom-up testing simultaneously.
Advantages:
- QA teams can tailor their integration testing activities based on project requirements, combining the strengths of different methods.
- More flexibility in terms of using resources
- Ensures that both the low-level and high-level software components are verified at the same time

Disadvantages:
- Complex, requiring careful planning and coordination to decide which modules to test using each method
- Effective communication among team members is crucial to ensure consistency and proper tracking of issues
- Teams may find it challenging to switch between different integration strategies
Test Cases For Integration Testing
In simple terms, integration testing is about seeing how 2 or more things work together: how 2 service classes interact with each other, how a service interacts with a datastore, or how the UI responds to the backend.
In the case of an eCommerce website, there are several integration test cases to consider:
- Verifying the interface link between the Login page and the Get Started page. Once a user enters their credentials, the system is expected to check whether they are correct, and if so, direct the user to the Get Started page as a logged-in user
- Verifying that when the user inputs their information into a form, the right data is sent to the right place, in the right format
- Verify that changes to the user profile are reflected in the order history and confirmation emails.
- Verify that products are marked as out of stock when their inventory reaches zero and that they become available again when restocked.
- Verify that shipping information is accurately provided to users, and tracking updates are correctly displayed.
- Verify that the selected products are correctly added to the shopping cart and that the final order is successfully processed, updating inventory, and generating an order confirmation.
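The cart/inventory test case above can be sketched as a small integration test. This is an invented, minimal model (`Inventory`, `Cart` and their methods are assumptions, not a real codebase); the point is that the test exercises the boundary between two modules rather than either one alone.

```python
# Hedged sketch of the cart/inventory integration test case (names invented).

class Inventory:
    def __init__(self, stock: dict[str, int]):
        self.stock = dict(stock)

    def reserve(self, sku: str) -> bool:
        """Decrements stock if the item is available."""
        if self.stock.get(sku, 0) > 0:
            self.stock[sku] -= 1
            return True
        return False

class Cart:
    def __init__(self, inventory: Inventory):
        self.inventory = inventory
        self.items: list[str] = []

    def add(self, sku: str) -> bool:
        # Integration point: the cart must consult the inventory module.
        if self.inventory.reserve(sku):
            self.items.append(sku)
            return True
        return False

inv = Inventory({"mug": 1})
cart = Cart(inv)
assert cart.add("mug") is True    # added, and stock is reserved
assert cart.add("mug") is False   # second add fails: now out of stock
assert inv.stock["mug"] == 0
```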
In the case of a banking application, these are the common integration testing cases:
- Verifying that the sender's balance is deducted by the correct amount and that the recipient's balance increases by the correct amount, after accounting for fees.
- Verify that transactions (e.g., deposits, withdrawals) accurately update the account balances and reflect in the transaction history.
- Verify that account statements are generated correctly and display the transaction history and interest earned.
- Verify that data consistency is maintained, and transactions do not lead to unexpected conflicts or errors.
- Assuming that the bank has a bonus point system, integration testing can verify that bonus points increase after a certain activity is recorded.
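The funds-transfer case in the list above can be sketched as follows. The account structure and the flat fee are assumptions made purely for this illustration:

```python
# Illustrative only: the transfer test case above, with a hypothetical
# flat fee deducted from the sender (all values are assumptions).

FEE = 50  # cents, hypothetical flat transfer fee

def transfer(accounts: dict[str, int], src: str, dst: str, amount: int) -> None:
    """Moves `amount` from src to dst, deducting the fee from the sender."""
    if accounts[src] < amount + FEE:
        raise ValueError("insufficient funds")
    accounts[src] -= amount + FEE
    accounts[dst] += amount

accounts = {"alice": 10_000, "bob": 0}
transfer(accounts, "alice", "bob", 2_000)

# Integration check: sender debited amount + fee, recipient credited amount.
assert accounts["alice"] == 10_000 - 2_000 - FEE
assert accounts["bob"] == 2_000
```

A real banking integration test would additionally verify that the transaction appears in both account histories and that the two updates happen atomically.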
Integration Testing Best Practices
To have a holistic integration test strategy covering all aspects of the application, it is crucial for testers to align with the developers and clients on their vision, and optimize their strategy along the way. No matter what, it is still essential to follow integration testing best practices:
- Ensure that all modules have been unit tested before moving to the integration phase, so that the integration test results are reliable
- Develop a comprehensive test plan and strategy that outlines the scope, objectives, test cases, and resources required for integration testing.
- Automate repetitive and complex test cases to improve testing efficiency and maintain test consistency across different test runs.
- Validate the input test data for higher test reliability
- Perform regression testing after each integration to ensure that new changes do not impact previously integrated components adversely.
Manual Integration Testing vs Automated Testing
Integration testing can be conducted manually by QA teams, usually in parallel with the development process.
The process usually starts with the development team committing new code to the shared repository, then a series of unit tests are triggered to validate the new code. After that, QA testers start to select different components based on the test plan to manually inspect and interact with, seeing if any defects arise.
However, performing integration testing manually is not always the best choice.
Automated integration testing, either with the help of automation testing frameworks like Selenium or automation testing tools, can significantly speed up the process and free up resources for other activities.
Unit Testing vs Integration Testing: Key Differences
In simple terms, unit testing focuses on testing individual units in isolation, while integration testing is about putting those units together and testing them as a whole. There are also several differences between them in other aspects, such as purpose, dependencies, and execution environment. You can have a look at the table below to see the comparison between the two testing types:

| | Unit Testing | Integration Testing |
|---|---|---|
| Focus | Focuses on testing individual units of code (functions or methods). | Focuses on testing interactions between multiple units or modules. |
| Purpose | Verifies that each unit of code works as intended in isolation. | Verifies that different units or modules work together correctly. |
| Dependencies | Mocks or stubs external dependencies to isolate the unit being tested. | Uses real dependencies, as it tests the integration between units. |
| Scope | Tests specific functionality within a small code unit. | Tests how different units collaborate with each other. |
| Environment | Can be executed in a development environment or Continuous Integration (CI) system. | Requires a more elaborate environment to host multiple units working together. |
| Test data | Typically uses small datasets or mocked data to isolate the unit. | May use larger datasets to mimic real-world scenarios for interactions. |
| Speed | Generally faster as it focuses on smaller pieces of code. | Can be slower due to the need to set up and tear down multiple units. |
| Bug detection | Helps to identify and isolate bugs within individual code units. | Helps to identify bugs related to how units interact with each other. |
| Coverage | Provides higher coverage for individual units or functions. | Provides higher coverage for the interactions between units. |
| Maintenance | Requires updates only when the unit's behavior changes. | May need updates when the interaction between units changes. |
| Execution | Can be run as part of a developer's workflow. | Typically conducted after unit tests and before end-to-end tests as part of integration pipelines. |
| Importance in CI/CD | Essential for Continuous Integration (CI) to ensure small units work as expected. | Critical for Continuous Deployment (CD) to ensure all parts work together. |
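The "Dependencies" row above is the difference that trips people up most often, so here is a small contrast sketch (the `Mailer`/`Signup` names are invented): the same service is tested once as a unit with a mocked dependency, and once as an integration with the real one.

```python
# Contrast sketch (invented names): the same service tested two ways.
from unittest.mock import Mock

class Mailer:
    def send(self, to: str, body: str) -> str:
        return f"sent to {to}"

class Signup:
    def __init__(self, mailer: Mailer):
        self.mailer = mailer

    def register(self, email: str) -> str:
        return self.mailer.send(email, "welcome!")

# Unit test: the dependency is mocked, isolating Signup's own logic.
mock_mailer = Mock()
mock_mailer.send.return_value = "sent to x@y.z"
assert Signup(mock_mailer).register("x@y.z") == "sent to x@y.z"
mock_mailer.send.assert_called_once_with("x@y.z", "welcome!")

# Integration test: the real Mailer is used, exercising the interaction.
assert Signup(Mailer()).register("x@y.z") == "sent to x@y.z"
```

The unit test would still pass if `Mailer.send` were broken; only the integration test catches a defect at the boundary between the two classes.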
Integration Testing vs End-to-end Testing
Simply put, integration testing verifies interactions between individual components or modules, whereas end-to-end testing verifies the flow of a complete user journey or business process.

| | End-to-end Testing | Integration Testing |
|---|---|---|
| Purpose | Validates system behavior in real-world scenarios | Validates integration between components |
| Scope | Broader in scope, covering the entire technology stack of the application | Interaction between specific components/modules |
| Cost | More expensive, as it often requires more resources, including personnel, equipment, and testing environments | Less expensive than end-to-end testing |
| Timing | Performed at the end of the software development lifecycle, before releases | Performed after unit testing and before end-to-end testing |
| Technique | Black-box testing, often using User Acceptance Testing (UAT) | White-box testing, often using API testing |
Read More: End-to-end testing vs Integration Testing
Integration testing is an essential part of software testing as it validates if the modules can communicate well with each other, which is something that can’t be tested with unit testing. We can take either the Big Bang approach or the Incremental approach to integration testing, both with their own advantages and disadvantages.
Performing integration testing typically involves API and UI Integration Testing. However, professionals may not be able to perform those tests in one place, and have to constantly switch from 1 tool to another when they need to perform different testing types.
Katalon for Integration Testing
The Katalon Platform is a powerful All-in-One platform to help QA teams plan and design test cases in 1 place without having to use any additional tools. Katalon also supports a wide range of application types, including API Testing and UI Testing to cover all of your needs.
Increased Test Coverage
A major pain for testers performing UI Testing is the huge number of devices and browsers to test the application on. The UI of an application may look perfectly fine on one device but be messy on another due to differences in screen size, resolution, or other technical specifications. Katalon allows you to stay within budget thanks to its cross-platform, cross-browser testing capabilities on cloud environments.
Reduce Test Maintenance Effort
The Katalon Platform is also built for test maintenance. When an application is updated, testers need to review whether their test cases must be adapted to the code changes. This daunting task is one the Katalon Platform can help with, thanks to its page object model design approach: Katalon stores locators across tests in an object repository, so when changes occur, locators and artifacts are already grouped together, making updates easy.