
What is End-to-End Testing? Definition, Tools, Best Practices

Written by Katalon Team | Jun 10, 2019 3:56:00 AM

E2E testing is the final piece of the puzzle. All other tests have passed; now it's time to bring everything together in a single test to see whether your application holds up as a whole. One test (suite) to rule them all!

In this article, we'll explore the concept of end-to-end (E2E) testing in great depth.


What is End-to-End Testing?

End-to-end testing (E2E testing) checks the system's behavior as a comprehensive whole. It's done after integration testing and before user acceptance testing to catch system-wide issues early.

When To Do E2E Tests?

What sets end-to-end testing apart from other testing types is its scope. To understand this, let's look at the test pyramid.

A comprehensive test strategy should include 3 layers of tests:

  1. Unit tests (base of the pyramid): These tests focus on individual components, like functions or methods, ensuring that each small piece of the system works correctly in isolation. Unit tests are simple, fast, and cover the most ground in a test suite. Developers typically write them while coding or shortly after, as they’re easy to automate and provide quick feedback.
  2. Integration tests (middle layer): Once individual components work, it’s time to check if they function properly together. Individual components that passed unit tests can totally break when they are integrated, usually due to data miscommunication. Integration tests ensure that data flows correctly between modules and that interfaces are solid. They offer a good mix of speed and coverage but need more setup than unit tests.
  3. E2E tests/system tests (top of the pyramid): End-to-end tests validate the entire application, from the user interface to the back-end, ensuring the system works as a whole. These tests give high confidence in meeting business requirements, but they’re slower and more complex. Because of this, they should be limited to critical workflows to avoid flakiness.

As you can see, while unit testing only asks ‘Is this specific module working well individually?’ and integration testing asks ‘Are these modules working well together?’, the goal of end-to-end tests is much more comprehensive: is everything working well together?

In a good test strategy, end-to-end tests should only account for 5-10% of the total number of tests. This figure is around 15-20% for integration tests and 70-80% for unit tests.
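
To make the three layers concrete, here is a minimal sketch in Python with pytest of what one test at each layer might look like. The shop URL, element IDs, and the api_client/test_db/browser fixtures are hypothetical placeholders, not part of any real project.

```python
# A sketch of one test per pyramid layer; fixture and element names are hypothetical.
import pytest

# --- Layer 1: unit test - one function in isolation, fast, no I/O ----------
def apply_discount(total: float, discount: float) -> float:
    return round(total * (1 - discount), 2)

def test_apply_discount():
    assert apply_discount(30.0, 0.1) == 27.0

# --- Layer 2: integration test - two components exchanging real data -------
def test_cart_api_persists_item(api_client, test_db):      # hypothetical fixtures
    response = api_client.post("/cart", json={"item_id": 42})
    assert response.status_code == 201
    assert test_db.cart_contains(item_id=42)                # hypothetical helper

# --- Layer 3: E2E test - the whole workflow through the UI (keep these few) -
def test_checkout_flow(browser):                            # hypothetical fixture
    browser.get("https://shop.example.com")                 # placeholder URL
    browser.find_element("id", "add-to-cart").click()
    browser.find_element("id", "checkout").click()
    assert "Order confirmed" in browser.page_source
```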

 

Benefits of End-to-End Testing

End-to-end testing has become increasingly popular and widely adopted because of the following benefits:

  • Quality management across multiple application levels: modern applications are built on complex architectures consisting of multiple layers with interconnected workflows. These layers may work fine individually but conflict with each other once connected. E2E testing can verify the interactions between these individual layers and components.
  • Backend QA: E2E testing first verifies the backend layers, especially the database of the application, which feeds critical information to other layers for the application to work. 
  • Ensure consistent application quality across environments: E2E testing verifies the frontend, ensuring that the application works as intended across a wide range of browsers, devices, and platforms. Cross-browser testing is frequently performed for this purpose. 
  • Third-party application testing: there are often external systems integrated into the application to perform highly specific tasks. End-to-end testing ensures compatibility between these external systems and internal ones, as well as the data communication between them.

 

End-to-End Testing vs Integration Testing

End-to-end testing is quite similar to integration testing, and indeed they overlap in several aspects. Here are the major differences between them:

| Aspect | Integration Testing | End-to-End Testing |
| --- | --- | --- |
| Perspective | Technical team’s point of view | Final user’s point of view |
| Goal | Ensure that application components work together | Ensure that the user experience is consistent |
| Scope | Multiple components within one application | May span the entire technology stack of the application |
| Cost | Less expensive to implement | More expensive to implement due to the hardware and software needed to best simulate real-world scenarios |
| Time needed | Faster than E2E testing (typically under 1 hour for about 200 tests) | Longer than integration testing (may take up to 4-8 hours) |

Read More: End-to-end Testing vs. Integration Testing: A Detailed Comparison

A quick example: let's say you have an application with a UI, an API-driven backend, and a database. The goal of E2E testing is to check the entire flow from the UI through the backend to the database, while the goal of integration testing is only to check the flow from the UI to the backend, or from the backend to the database. In other words, the scope of integration testing is more limited than the scope of end-to-end testing.

Why run both? Integration tests are quicker to execute and provide faster, more targeted feedback. When they fail, it’s easier to pinpoint the issue since it’s isolated to specific components, excluding UI concerns. Test layers are designed to systematically narrow down potential problems in your codebase or configuration. In some cases, E2E tests can simulate machine-to-machine interactions, where the “user” is another system rather than a human, ensuring seamless communication between systems.
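
To illustrate the difference in scope under the same assumptions (a hypothetical shop application exposing a REST API and a UI), an integration check might stop at the API-to-database boundary, while the E2E check starts in the browser and ends at the database. The endpoint, locators, and test_db/browser fixtures below are illustrative only.

```python
# Same hypothetical shop: integration scope vs end-to-end scope.
import requests
from selenium.webdriver.common.by import By

API = "https://shop.example.com/api"     # placeholder URL

def test_api_creates_order(test_db):                      # integration scope
    resp = requests.post(f"{API}/orders", json={"sku": "A-100", "qty": 1})
    assert resp.status_code == 201
    assert test_db.order_exists(resp.json()["order_id"])  # hypothetical helper

def test_order_placed_from_ui(browser, test_db):          # end-to-end scope
    browser.get("https://shop.example.com/product/A-100")
    browser.find_element(By.ID, "buy-now").click()
    browser.find_element(By.ID, "confirm-order").click()
    order_id = browser.find_element(By.ID, "order-id").text
    assert test_db.order_exists(order_id)                  # checked all the way down
```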

 

End-to-end Testing vs Functional Testing

End-to-end tests are not just several unit tests and functional tests strung together – they are more complex and carry more risks. We’ve listed the main differences between functional and E2E tests to illustrate this further.

| Aspect | Functional Tests | End-to-End Tests |
| --- | --- | --- |
| Scope | Testing is limited to one single piece of code or application. | Testing crosses multiple applications and user groups. |
| Goal | Ensures the tested software meets acceptance criteria. | Ensures a process continues to work after changes are made. |
| Test Method | Tests the way a single user engages with the application. | Tests the way multiple users work across applications. |
| What To Validate | Validate the result of each test for inputs and outputs. | Validate that each step in the process is completed. |
 

Types of End-to-End Testing

1. Horizontal E2E test

Horizontal end-to-end testing is what the end user thinks of when they hear “end-to-end testing”. In this approach, testers focus on ensuring that each individual workflow in the application works correctly.

An eCommerce website is a good example of what horizontal E2E testing looks like from the end user’s perspective. The tests to be performed involve verifying the Product Search feature, the Product Page functionalities, the Product Ordering steps, and the Shipping details.

In other words, with horizontal end-to-end testing, QA professionals want to put themselves in the position of the user and test how they experience every customer-facing aspect of the application. 
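
As a rough sketch of that user-facing flow, a horizontal E2E test scripted with Selenium in Python might walk through search, product page, ordering, and shipping in a single pass. The URL and every element locator below are hypothetical.

```python
# A minimal horizontal E2E sketch with Selenium (URL and locators are made up).
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_search_to_shipping():
    driver = webdriver.Chrome()
    try:
        # Product Search
        driver.get("https://shop.example.com")
        driver.find_element(By.ID, "search-box").send_keys("running shoes")
        driver.find_element(By.ID, "search-button").click()

        # Product Page
        driver.find_element(By.CSS_SELECTOR, ".result:first-child a").click()
        assert driver.find_element(By.ID, "add-to-cart").is_displayed()

        # Product Ordering
        driver.find_element(By.ID, "add-to-cart").click()
        driver.find_element(By.ID, "checkout").click()

        # Shipping details
        driver.find_element(By.ID, "address").send_keys("1 Test Street")
        driver.find_element(By.ID, "place-order").click()
        assert "Thank you" in driver.page_source
    finally:
        driver.quit()
```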

Read More: Manual Testing: A Comprehensive Guide

 

2. Vertical E2E test

Vertical E2E testing is a more technical approach. This is what testers think of when they hear “end-to-end testing”. Unlike end users, who only experience the frontend, testers also work with the backend, and they want to make sure that the right data is transferred to the right place, at the right time, so that all the background processes keeping the application running are executed smoothly.

In the eCommerce website context, vertical E2E testing involves verifying the Account Registration & Verification process, account-related features, the product database, product updates, and eventually the UI (frontend). These vertical E2E tests happen in sequential, hierarchical order.
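
Here is a hedged sketch of that layered, bottom-up verification for account registration, assuming a hypothetical registration endpoint, a test copy of the database, and illustrative element IDs.

```python
# A vertical E2E sketch: verify each layer in order for account registration.
import sqlite3
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_registration_through_every_layer():
    email = "new.user@example.com"

    # 1. Backend/API layer: the registration endpoint accepts the request.
    resp = requests.post("https://app.example.com/api/register",      # placeholder
                         json={"email": email, "password": "s3cret!"})
    assert resp.status_code == 201

    # 2. Database layer: the account row actually landed in storage.
    db = sqlite3.connect("app_test.db")            # assumption: a test copy of the DB
    row = db.execute("SELECT status FROM users WHERE email = ?", (email,)).fetchone()
    assert row is not None and row[0] == "pending_verification"

    # 3. Frontend/UI layer: the new account is reflected in the UI.
    driver = webdriver.Chrome()
    try:
        driver.get("https://app.example.com/login")
        driver.find_element(By.ID, "email").send_keys(email)
        driver.find_element(By.ID, "password").send_keys("s3cret!")
        driver.find_element(By.ID, "sign-in").click()
        assert "Verify your email" in driver.page_source
    finally:
        driver.quit()
```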

 

End-to-End Testing Success Metrics

Some of the many testing metrics used for E2E testing include:

1. Test Case Preparation Status

This metric tracks the percentage of test cases that have been prepared relative to the total number of planned test cases. It provides insights into the readiness of test cases before execution begins.

Formula:

Preparation Progress = (Number of Test Cases Prepared / Total Planned Test Cases) × 100

 

2. Test Progress Tracking

Metrics:

  • Execution Progress: Measures the percentage of test cases executed relative to the total planned.
  • Pass Percentage: Indicates the percentage of executed test cases that passed.
  • Fail Percentage: Indicates the percentage of executed test cases that failed, providing insights into areas needing improvement.

Formulas: 

Execution Progress = (Number of Executed Test Cases / Total Planned Test Cases) × 100

Pass Percentage = (Number of Passed Test Cases / Number of Executed Test Cases) × 100

Fail Percentage = (Number of Failed Test Cases / Number of Executed Test Cases) × 100
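
As a quick sanity check of these formulas, here is a small helper in Python; the sample numbers are made up. Note that the pass rate divides passed by executed, not the other way around.

```python
# Progress metrics exactly as defined above (illustrative numbers only).
def progress_metrics(planned: int, executed: int, passed: int, failed: int) -> dict:
    return {
        "execution_progress": executed / planned * 100,
        "pass_percentage": passed / executed * 100,
        "fail_percentage": failed / executed * 100,
    }

# Example: 200 planned, 150 executed, 135 passed, 15 failed
print(progress_metrics(200, 150, 135, 15))
# -> {'execution_progress': 75.0, 'pass_percentage': 90.0, 'fail_percentage': 10.0}
```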

 

3. Defects Status and Details

Metrics:

  • Open Defect Percentage: Measures the percentage of defects that are still unresolved.
  • Closed Defect Percentage: Measures the percentage of defects that have been successfully resolved.
  • Defect Distribution: Analyzes defects based on severity (critical, major, minor) and priority (high, medium, low), helping prioritize fixes.

Formulas: 

Open Defect Percentage = (Number of Open Defects / Total Defects) × 100

Closed Defect Percentage = (Number of Closed Defects / Total Defects) × 100
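
And the defect-status formulas as code, again with illustrative counts:

```python
# Defect-status metrics as defined above (counts are made up for the example).
def defect_metrics(open_defects: int, closed_defects: int) -> dict:
    total = open_defects + closed_defects
    return {
        "open_defect_percentage": open_defects / total * 100,
        "closed_defect_percentage": closed_defects / total * 100,
    }

print(defect_metrics(open_defects=8, closed_defects=32))
# -> {'open_defect_percentage': 20.0, 'closed_defect_percentage': 80.0}
```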

 

4. Environment Availability

This metric measures the availability of the testing environment compared to the scheduled operational hours, ensuring sufficient testing time and resource availability.

Formula: 

Environment Uptime = (Actual Operational Hours / Scheduled Hours) × 100

 

End-to-End Testing Main Challenges

Detecting bugs in complex workflows certainly has its challenges.

1. End-to-end testing can be time-consuming

Creating the necessary test suites and matching them to the user’s navigation within the application may require running thousands of tests and can be quite time-consuming. The time required comes from replicating full workflows, setting up test data, handling dependencies across services, and ensuring coverage of all possible edge cases.

Potential solutions:

  • Automation tools: Utilizing automated testing tools can significantly speed up the process. These tools let you automate repetitive user interactions, but the scripts require maintenance as the system evolves.
  • Parallel Testing: Running tests in parallel across different environments or machines can reduce execution time (see the sketch after this list).
  • Test Case Prioritization: Prioritize testing critical user flows to optimize time.
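
Below is a minimal sketch of the parallel-testing idea using only the Python standard library. The suite paths and the pytest command are placeholders for however your E2E suites are actually launched.

```python
# Run several independent E2E suites in parallel (paths and command are hypothetical).
import subprocess
from concurrent.futures import ThreadPoolExecutor

SUITES = ["suites/checkout", "suites/search", "suites/account"]

def run_suite(path: str) -> int:
    # Each suite runs as its own process; with a cloud grid, each run could also
    # target a different browser/OS combination.
    return subprocess.run(["pytest", path]).returncode

with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_suite, SUITES))

print("all suites passed" if all(code == 0 for code in results) else "failures found")
```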

2. Setting up the right test environment

Accessing a proper test environment is not always straightforward and can involve installing local agents and logging into virtual machines. To reduce these difficulties, it is better to adopt a cloud-based testing system that gives testers access to a wider range of platforms and environments to execute their tests on, saving the initial investment in physical machines.

Potential solutions:

  • Cloud-based Testing: Cloud platforms provide pre-configured environments for testing on various devices, operating systems, and browsers, reducing the need to manually configure complex environments.
  • Containerization: Using Docker and Kubernetes to create lightweight, isolated environments for testing, which can be quickly spun up and torn down as needed.
  • Mock Services: When it's difficult to set up real systems, mocking some services (like APIs) lets you test parts of your system without the full environment (see the sketch after this list).
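
Here is the mock-service idea as a minimal sketch using only the Python standard library; the /rates endpoint, port, and payload are invented for illustration.

```python
# A tiny stand-in for a third-party shipping API (endpoint and payload are made up).
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockShippingAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/rates":
            body = json.dumps({"standard": 4.99, "express": 12.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("localhost", 8099), MockShippingAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
# Point the application under test at http://localhost:8099 instead of the real
# provider, run the E2E flow, then call server.shutdown() when the tests finish.
```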

 

3. Test Data Management

Complex workflows require specific data sets at different stages of the application. Managing and resetting the state of this data between tests can be difficult.

Potential solutions:

  • Automated Data Setup: Automatically generating the necessary test data before running the tests (see the fixture sketch after this list).
  • Database Snapshots: Using database snapshots to quickly reset the test environment to a known state after each test.
  • Data Virtualization: This technique allows for using virtual data models that simulate real data without needing to replicate it physically in the test environment.
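
A minimal sketch of automated data setup with a pytest fixture, using an in-memory SQLite database as a stand-in for whatever your application actually stores; the table and email are illustrative.

```python
# Automated data setup and teardown with a pytest fixture.
import sqlite3
import pytest

@pytest.fixture
def seeded_db():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (email TEXT, status TEXT)")
    db.execute("INSERT INTO users VALUES ('e2e.user@example.com', 'active')")
    db.commit()
    yield db          # every test starts from the same known state
    db.close()        # teardown: the state is discarded after each test

def test_known_user_is_active(seeded_db):
    row = seeded_db.execute(
        "SELECT status FROM users WHERE email = 'e2e.user@example.com'").fetchone()
    assert row[0] == "active"
```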

End-to-End Testing Best Practices

1. Focus on your application’s critical workflows first

It's important to prioritize critical parts of the application. Focus on what most people use in your application and create tests for that in advance. 

After selecting the most critical workflows, start by breaking them down into smaller steps. This gives you a more focused perspective into how your tests should be carried out and minimizes the number of unrelated tests. 

2. Avoid exception testing

Exception testing is the process of testing the behavior of the system when it encounters an error condition or exceptional event. Although it is a highly recommended practice in general, end-to-end testing is not suitable for exception testing. This type of test may identify that an error occurred but does not necessarily tell us why the error happened or how it will affect the application or system.

3. Build End-to-End tests that minimize UI flakiness

When it comes to end-to-end tests, UI tests are usually involved, and these types of tests regularly fail due to the flaky nature of interacting with the system from the perspective of the end user. Network issues, server slowness, and many other factors can affect the test results and generate false positives.

To deal with such flakiness, it is recommended to account for unexpected system issues while performing your tests. For example, with Katalon, testers can utilize the Smart Wait feature to wait until all elements on the screen have been loaded before continuing the predefined actions.
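
Smart Wait is built into Katalon; as a rough analogue outside the platform, an explicit wait in plain Selenium retries until the element is genuinely ready instead of failing on the first slow render. The URL and locator below are placeholders.

```python
# An explicit wait to reduce flakiness (placeholder URL and element ID).
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://shop.example.com/checkout")

# Wait up to 15 seconds for the button to become clickable instead of
# failing immediately when the page is slow to render.
place_order = WebDriverWait(driver, 15).until(
    EC.element_to_be_clickable((By.ID, "place-order"))
)
place_order.click()
driver.quit()
```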

4. Leverage automation testing

Unlike other types of testing, end-to-end testing covers the application's functionality from end to end, spanning all possible user scenarios, interactions, and interfaces. It requires a higher level of coordination and collaboration between teams, as it involves testing the integration of various modules, APIs, and systems.
 

For end to end testing, we need more than just test automation tools. We need a software quality management platform. Such a platform provides a comprehensive test management system that allows testers to manage and organize test cases, requirements, test plans, and defects in a single place.

 

 

End-to-End Test Automation with Katalon Studio

Katalon is an excellent software quality management platform that can make your end-to-end tests less complicated. With Katalon, software teams can have an end-to-end perspective of their testing activities, from collaboration for test planning, test authoring, test execution, test artifact management, to test analytics and reporting. 

 

A demo speaks louder than words. Here's a sneak peek of how Katalon works:

 

Katalon supports a wide range of testing types, and both developers and manual testers can easily automate UI or functional testing for web, API, desktop, and mobile applications, all in one place. Tools used across the software testing life cycle, including your existing CI/CD pipeline and test management tools, can natively integrate with Katalon, giving you a comprehensive testing experience.

 

Katalon has several noticeable features to support your E2E testing activities:

1. On-cloud Test Execution

Performing end-to-end testing across multiple browsers and operating systems can be a daunting task, especially when it comes to setting up physical machines to execute tests. This process can be time-consuming and resource-intensive, leading to delays in the end-to-end testing process.

 

By leveraging Katalon, you can easily run tests in the cloud across a wide range of browsers, devices, and operating systems simultaneously, ensuring comprehensive test coverage. This feature helps organizations save time and resources while improving the quality of their web applications.

 

Read More: Katalon TestCloud is The New and Better Way To Test Across Environments

 

2. Katalon Recorder

Different use cases are often mixed together in different orders and variations. We can call each such grouping of use cases a user journey. Technically, a user journey is a collection of steps in which each step corresponds to a user action; collectively, these steps represent a typical user session.

Katalon offers the Recorder feature on the web to help you accomplish the task of creating user journeys without any hassle. The Recorder essentially watches and records all your movements on the application so that you can focus on creating the journey itself.
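
One simple way to picture a recorded journey, purely as an illustration and not Katalon's internal recording format, is an ordered list of steps, each pairing an action with a target:

```python
# A user journey modeled as an ordered list of steps (illustrative only).
from dataclasses import dataclass

@dataclass
class Step:
    action: str    # e.g. "navigate", "type", "click"
    target: str    # URL or element locator
    value: str = ""

checkout_journey = [
    Step("navigate", "https://shop.example.com"),
    Step("type", "#search-box", "running shoes"),
    Step("click", "#search-button"),
    Step("click", ".result:first-child a"),
    Step("click", "#add-to-cart"),
    Step("click", "#checkout"),
]
```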

 

3. Built-in keywords

A user journey usually consists of hundreds of steps. When a test case – which represents a user journey – fails, it may be easy to pinpoint superficial causes (e.g. a change in a web element’s property). However, it is harder to diagnose the actual cause solely from the fact that the test failed: perhaps the current page is not the page that was recorded because at some point the test case went off track.

To ensure that the user journey passes through certain milestones, test oracles are necessary. A test oracle is a mechanism for checking that the test case is being executed as expected. Katalon provides a set of built-in keywords that implement this concept: you can assert or verify that a web element contains the text you expect, that its properties match expected values, and many other types of expectations.
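
Katalon exposes these oracles as built-in keywords. To show the underlying idea in a neutral way, here is the same kind of check written as plain assertions in a Selenium script; the page title, locator, and expected text are hypothetical.

```python
# Test oracles as explicit assertions at journey milestones (all values are made up).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://shop.example.com/cart")

# Oracle 1: the page we landed on is the page the journey expects.
assert driver.title == "Your Cart", f"unexpected page: {driver.title}"

# Oracle 2: a specific element contains the text we expect at this milestone.
total = driver.find_element(By.ID, "cart-total")
assert "$" in total.text, "cart total should display a price"

driver.quit()
```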

 

4. Custom keywords

Depending on the business logic of your application, there may be behaviors that occur across different pages but differ only in some known characteristics. In such cases, it is a best practice to capture these behaviors in a behavior template that can be filled with specific information when necessary.

In Katalon, a custom keyword can be defined to represent such a behavior template. Once defined, it can be reused in different test cases and even in different test projects. You can even share your custom keyword as a plug-in through Katalon Store so that others can benefit from it.
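
To illustrate the same “behavior template” idea outside Katalon, here is a sketch of a parameterized Python helper that any test case can reuse; the locators and login URL are assumptions.

```python
# A reusable behavior template: only the parameters change between tests.
from selenium.webdriver.common.by import By

def login_as(driver, email: str, password: str,
             base_url: str = "https://app.example.com"):
    """Reusable login behavior; credentials and base URL are the variable parts."""
    driver.get(f"{base_url}/login")
    driver.find_element(By.ID, "email").send_keys(email)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "sign-in").click()

# Usage in any test case:
#   login_as(driver, "admin@example.com", "s3cret!")
#   login_as(driver, "viewer@example.com", "s3cret!")
```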