E2E testing is the final piece of the puzzle. Every test has passed, and now it's time to bring everything together in one single test to see if your application holds up as a singular entity. One test (suite) to rule them all!
In this article, we'll explore the concept of E2E testing, also known as end-to-end testing, in great depth.
End-to-end testing has become widely adopted because of the following benefits:
End-to-end testing is quite similar to integration testing, and indeed they overlap in several aspects. Here are the major differences between them:
| Aspect | Integration Testing | End-to-End Testing |
| --- | --- | --- |
| Perspective | Technical team’s point of view | Final user’s point of view |
| Goal | Ensure that application components work together | Ensure that the user experience is consistent |
| Scope | Multiple components within one application | May span the entire technology stack of the application |
| Cost | Less expensive to implement | More expensive to implement due to the hardware and software needed to best simulate real-world scenarios |
| Time needed | Faster than E2E testing (often under 1 hour for 200 tests) | Longer than integration testing (may take 4 - 8 hours) |
Read More: End-to-end Testing vs. Integration Testing: A Detailed Comparison
A quick example: let's say you have an application with a UI, a backend, and a database. The goal of E2E testing is to check the entire flow from the UI through the backend to the database, while the goal of integration testing is only to check the flow from the UI to the backend, or from the backend to the database. In other words, the scope of integration testing is more limited than the scope of end-to-end testing.
Why run both? Integration tests are quicker to execute and provide faster, more targeted feedback. When they fail, it’s easier to pinpoint the issue since it’s isolated to specific components, excluding UI concerns. Test layers are designed to systematically narrow down potential problems in your codebase or configuration. In some cases, E2E tests can simulate machine-to-machine interactions, where the “user” is another system rather than a human, ensuring seamless communication between systems.
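The scope difference can be sketched in code. The following is a minimal, hypothetical Python example (the `Database`, `Backend`, and `UI` classes are stand-ins, not a real framework): the integration test exercises only the backend-database pair, while the E2E test drives the whole flow from the UI down to the database.

```python
# Hypothetical layered application; names and logic are illustrative only.

class Database:
    def __init__(self):
        self.orders = []

    def save_order(self, item):
        self.orders.append(item)
        return len(self.orders)  # acts as the order id

class Backend:
    def __init__(self, db):
        self.db = db

    def place_order(self, item):
        return self.db.save_order(item)

class UI:
    def __init__(self, backend):
        self.backend = backend

    def click_buy(self, item):
        order_id = self.backend.place_order(item)
        return f"Order #{order_id} confirmed"

# Integration test: backend <-> database only; the UI is out of scope.
def test_backend_saves_order():
    db = Database()
    backend = Backend(db)
    assert backend.place_order("book") == 1
    assert db.orders == ["book"]

# E2E test: the full flow from the UI through the backend to the database.
def test_user_can_buy_a_product():
    db = Database()
    ui = UI(Backend(db))
    assert ui.click_buy("book") == "Order #1 confirmed"
    assert db.orders == ["book"]

test_backend_saves_order()
test_user_can_buy_a_product()
```

Note how a failure in the integration test immediately points at the backend-database pair, while an E2E failure could originate anywhere in the stack.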
End-to-end tests are not just several unit tests and functional tests strung together – they are more complex and carry more risks. We’ve listed the main differences between functional and E2E tests to illustrate this further.
| Aspect | Functional Tests | End-to-End Tests |
| --- | --- | --- |
| Scope | Testing is limited to one single piece of code or application. | Testing crosses multiple applications and user groups. |
| Goal | Ensures the tested software meets acceptance criteria. | Ensures a process continues to work after changes are made. |
| Test Method | Tests the way a single user engages with the application. | Tests the way multiple users work across applications. |
| What To Validate | Validate the result of each test for inputs and outputs. | Validate that each step in the process is completed. |
Horizontal end-to-end testing is what the end user thinks of when they hear “end-to-end testing”. In this approach, testers focus on ensuring that each individual workflow in the application works correctly.
The eCommerce example above is exactly what horizontal E2E testing looks like from the end user’s perspective. The tests to be performed involve verifying the Product Search feature, the Product Page functionalities, Product Ordering steps, and Shipping details.
In other words, with horizontal end-to-end testing, QA professionals want to put themselves in the position of the user and test how they experience every customer-facing aspect of the application.
Read More: Manual Testing: A Comprehensive Guide
Vertical E2E testing is a more technical approach. This is what the tester thinks of when they hear end-to-end testing. Unlike end users, who only experience the frontend, testers also work with the backend, and they want to make sure that the right data is transferred to the right place, at the right time, so that all the background processes keeping the application running can be executed smoothly.
In the eCommerce website context, vertical E2E testing involves verifying the Account Registration & Verification process, account-related features, the product database, product updates, and eventually the UI (frontend). These vertical E2E tests happen in sequential, hierarchical order.
Some of the many testing metrics used for E2E testing include:
This metric tracks the percentage of test cases that have been prepared relative to the total number of planned test cases. It provides insights into the readiness of test cases before execution begins.
Formula:
Preparation Progress = (Number of Test Cases Prepared / Total Planned Test Cases) × 100
Metrics:
Formulas:
Execution Progress = (Number of Executed Test Cases / Total Planned Test Cases) × 100
Pass Percentage = (Number of Passed Test Cases / Number of Executed Test Cases) × 100
Fail Percentage = (Number of Failed Test Cases / Number of Executed Test Cases) × 100
Metrics:
Formulas:
Open Defect Percentage = (Number of Open Defects / Total Defects) × 100
Closed Defect Percentage = (Number of Closed Defects / Total Defects) × 100
This metric measures the availability of the testing environment compared to its scheduled operational hours. It ensures sufficient testing time and resource availability.
Formula:
Environment Uptime = (Actual Operational Hours / Scheduled Hours) × 100
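The metric formulas above are simple ratios; the sketch below expresses them as plain Python functions (the function names are ours, not part of any tool) and works through one illustrative data set.

```python
# E2E testing metrics as functions. Inputs are raw counts (or hours);
# each result is a percentage.

def preparation_progress(prepared, planned):
    return prepared / planned * 100

def execution_progress(executed, planned):
    return executed / planned * 100

def pass_percentage(passed, executed):
    return passed / executed * 100

def fail_percentage(failed, executed):
    return failed / executed * 100

def open_defect_percentage(open_defects, total_defects):
    return open_defects / total_defects * 100

def closed_defect_percentage(closed_defects, total_defects):
    return closed_defects / total_defects * 100

def environment_uptime(actual_hours, scheduled_hours):
    return actual_hours / scheduled_hours * 100

# Illustrative run: 180 of 200 planned cases executed, 135 passed, 45 failed.
print(execution_progress(180, 200))  # 90.0
print(pass_percentage(135, 180))     # 75.0
print(fail_percentage(45, 180))      # 25.0
```

Note that pass and fail percentages are computed against *executed* cases, while execution progress is computed against *planned* cases.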
Detecting bugs in complex workflows certainly has its challenges.
Creating the necessary test suites and matching them to the user’s navigation within the application may require running thousands of tests and can be quite time-consuming. The time goes into replicating full workflows, setting up test data, handling dependencies across services, and ensuring coverage of all possible edge cases.
Potential solutions:
Accessing a proper test environment is not always straightforward and can involve installing local agents and logging into virtual machines. To reduce these difficulties, it is better to adopt a cloud-based testing system, which gives testers access to a wider range of platforms and environments to execute their tests on, saving the initial investment in physical machines.
Potential solutions:
Complex workflows require specific data sets at different stages of the application. Managing and resetting the state of this data between tests can be difficult.
It's important to prioritize critical parts of the application. Focus on what most people use in your application and create tests for that in advance.
After selecting the most critical workflows, start by breaking them down into smaller steps. This gives you a more focused perspective into how your tests should be carried out and minimizes the number of unrelated tests.
Exception testing is the process of testing the behavior of a system when it encounters an error condition or exceptional event. Although exception testing is a highly recommended practice, end-to-end testing is not suitable for it. An E2E test may identify that an error occurred but does not necessarily tell us why the error happened or how it will affect the application or system.
When it comes to end-to-end tests, UI tests are usually involved, and these types of tests regularly fail due to the flaky nature of interacting with the system from the perspective of the end user. Network issues, server slowness, and many other factors can affect the test results and generate false positives.
To deal with such flakiness, it is recommended to account for unexpected system issues while performing your tests. For example, with Katalon, testers can utilize the Smart Wait feature to wait until all elements on the screen have been loaded before continuing the predefined actions.
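Smart Wait is Katalon's built-in feature; the general idea behind it, waiting for a condition instead of failing the moment it is unmet, can be sketched in plain Python. The `wait_until` helper and the `page` dictionary below are illustrative stand-ins, not a real testing API.

```python
import time

def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns True or the timeout expires.

    A generic stand-in for 'smart wait' behavior: instead of failing
    immediately on a slow-loading page, the test retries for a while
    before giving up, which absorbs transient slowness and flakiness.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Hypothetical usage: wait for the page to finish loading before acting.
page = {"loaded": False}

def load_page():
    page["loaded"] = True  # in a real test, the browser does this asynchronously

load_page()
assert wait_until(lambda: page["loaded"], timeout=2.0)
```

The timeout keeps a genuinely broken page from hanging the suite: when the condition never becomes true, the helper returns `False` and the test can fail with a clear cause.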
Unlike other types of testing, end-to-end testing involves testing the application's functionality from end-to-end, covering all possible user scenarios, interactions, and interfaces. It requires a higher level of coordination and collaboration between teams, as it involves testing the integration of various modules, APIs, and systems.
For end to end testing, we need more than just test automation tools. We need a software quality management platform. Such a platform provides a comprehensive test management system that allows testers to manage and organize test cases, requirements, test plans, and defects in a single place.
Katalon is an excellent software quality management platform that can make your end-to-end tests less complicated. With Katalon, software teams can have an end-to-end perspective of their testing activities, from collaboration for test planning, test authoring, test execution, test artifact management, to test analytics and reporting.
Demo speaks louder than words. Here's a sneak peek of how Katalon works:
Katalon supports a wide range of testing types, and both developers and manual testers can easily automate UI or functional testing for web, API, desktop, and mobile applications, all in one place. Tools used across the software testing life cycle, including your existing CI/CD pipeline and test management tools, can natively integrate with Katalon, giving you a comprehensive testing experience.
Katalon has several noticeable features to support your E2E testing activities:
Performing end-to-end testing across multiple browsers and operating systems can be a daunting task, especially when it comes to setting up physical machines to execute tests. This process can be time-consuming and resource-intensive, leading to delays in the end-to-end testing process.
By leveraging Katalon, you can easily run tests on the cloud, on a wide range of multiple browsers, devices, and operating systems simultaneously, ensuring comprehensive test coverage. This feature helps organizations save time and resources while improving the quality of their web applications.
Read More: Katalon TestCloud is The New and Better Way To Test Across Environments
Different use cases are often mixed together in different orders and variations. But we can call each grouping of use cases a user journey. Technically, a user journey is a collection of steps in which each step corresponds to a user action. Collectively, they represent a typical user session.
Katalon offers the Recorder feature on the web to help you accomplish the task of creating user journeys without any hassle. The Recorder essentially watches and records all your movements on the application so that you can focus on creating the journey itself.
A user journey usually consists of hundreds of steps. When a test case representing a user journey fails, it may be easy to pinpoint superficial causes (e.g. a change in a Web element’s property). However, it is harder to diagnose the actual cause solely from the fact that the test failed: the current page may not be the page that was recorded because the test case went off track at some earlier point.
To ensure that the user journey passes through certain milestones, test oracles are necessary. A test oracle is a mechanism for checking that the test case is executing as expected. Katalon provides a set of built-in keywords that implement this concept: you can assert or verify a web element against text you expect it to contain, or its properties against expected values, among many other types of expectations.
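A common distinction among such keywords is between a hard assertion, which stops the journey at the first mismatch, and a soft verification, which records the mismatch and lets the journey continue so later milestones are still checked. The sketch below illustrates the soft variant in plain Python; the `SoftVerifier` class and its method names are hypothetical, not Katalon's API.

```python
class SoftVerifier:
    """Collects failures instead of stopping at the first one, similar
    in spirit to 'verify' keywords (as opposed to hard 'assert' ones)."""

    def __init__(self):
        self.failures = []

    def verify_equal(self, actual, expected, message=""):
        if actual != expected:
            self.failures.append(
                f"{message}: expected {expected!r}, got {actual!r}"
            )

    def report(self):
        return self.failures

# Hypothetical checkpoint: check the page title and a button label.
v = SoftVerifier()
v.verify_equal("Checkout", "Checkout", "page title")  # milestone reached
v.verify_equal("Pay now", "Pay Now", "button label")  # mismatch recorded
print(len(v.report()))  # one failure collected, but the journey kept running
```

Collecting failures this way yields a fuller diagnosis for long journeys: one run can reveal every milestone that drifted, rather than only the first.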
Depending on the business logic of your application, there may be behaviors that occur across different pages but differ only in some known characteristics. In such cases, it is a best practice to capture these behaviors in a behavior template that can be filled with specific information when necessary.
In Katalon, a custom keyword can be defined to represent such a behavior template. Once defined, it can be reused in different test cases and even in different test projects. You can even share your custom keyword as a plug-in through Katalon Store so that others can benefit from it.
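Outside Katalon, the same behavior-template idea is just a parameterized, reusable function: define the shared behavior once, fill in the specifics per use. The sketch below is a hypothetical example; `fill_and_submit_form` and the page dictionaries stand in for real pages and controls.

```python
# A "behavior template" as a parameterized, reusable function,
# analogous to a custom keyword. Names are illustrative.

def fill_and_submit_form(page, fields, submit_label="Submit"):
    """Generic template: fill the named fields, then press the submit
    control. Works on any page that follows this form-like shape."""
    for name, value in fields.items():
        page[name] = value
    page["submitted_via"] = submit_label
    return page

# The same template reused across two different flows with different data:
login_page = fill_and_submit_form(
    {}, {"email": "a@b.c", "password": "x"}, "Log in"
)
signup_page = fill_and_submit_form(
    {}, {"email": "a@b.c", "name": "Ann"}, "Sign up"
)
assert login_page["submitted_via"] == "Log in"
assert signup_page["name"] == "Ann"
```

When the shared behavior changes (say, a confirmation dialog is added), only the template needs updating, and every test case that reuses it picks up the change.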