Modern applications are expected to be up 24/7, and that puts real pressure on the performance testing team. One of its goals is to continuously monitor real users’ interactions with the site and develop a performance benchmark based on that information.
However, there’s a catch: sometimes real user monitoring doesn't allow you to observe the system's behavior in extreme scenarios (such as a sudden spike in traffic or abnormal user behavior). After all, those are extreme scenarios, and they don’t happen often. That’s why synthetic monitoring exists: to fill the gap that real user monitoring leaves.
In this article, we will dive deep into the concept of synthetic monitoring, present its key components, and cover best practices for implementing it well. You can even jump straight to our comprehensive guide on how to do synthetic monitoring.
Synthetic monitoring is a process where testers create automated scripts that simulate real users’ interactions with the application or website to proactively detect performance issues and assess overall system health. These automated scripts are called synthetic transactions, and they help testers gauge the system’s performance when real user data is not yet available or too challenging to capture.
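To make the idea concrete, here is a minimal sketch of a synthetic transaction, assuming Node.js 18 or later for the built-in fetch; the URL is a placeholder you would swap for your own endpoint:

```javascript
// Sketch of a single synthetic transaction: an availability check
// against a placeholder URL, measuring status and response time.
const TARGET_URL = 'https://www.example.com'; // replace with your own endpoint

async function syntheticCheck() {
  const start = Date.now();
  try {
    const response = await fetch(TARGET_URL);
    const elapsed = Date.now() - start;
    console.log(`${TARGET_URL} responded ${response.status} in ${elapsed} ms`);
    return response.ok; // true for any 2xx status code
  } catch (error) {
    console.error(`${TARGET_URL} is unreachable: ${error.message}`);
    return false;
  }
}

syntheticCheck();
```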
Real user data doesn't always tell the full story about a system’s performance. Data captured directly from real users is admittedly great for measuring actual user experience, but what if the feature is still under development and not yet released to real users? How can you proactively capture data to build a performance benchmark for it?
Even after a feature is released, user activity tends to be quite limited in the early stages. Performance testing teams have no choice but to wait passively until enough data has been gathered for their test results to be statistically significant.
Similarly, to ensure the highest level of coverage, testers also need to look at more extreme cases, such as sudden traffic spikes or server outages, and you don’t usually have real data for such outlier scenarios. To learn how the system behaves under that kind of pressure, testers can create synthetic transactions that mimic a large volume of users, send them to the website, and document the results, as sketched below.
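Here is a rough sketch of that idea, not a production load test: it fires a burst of concurrent requests using Node.js 18+'s built-in fetch and reports how response times hold up. The URL and burst size are illustrative assumptions, and you should only point it at systems you own:

```javascript
// Sketch: approximate a sudden traffic spike by firing a burst of
// concurrent requests and recording how response times degrade.
const TARGET_URL = 'https://www.example.com'; // placeholder: only test systems you own
const CONCURRENT_USERS = 50; // illustrative burst size

async function timedRequest() {
  const start = Date.now();
  const response = await fetch(TARGET_URL);
  return { status: response.status, ms: Date.now() - start };
}

(async () => {
  // Launch all requests at once to simulate the spike
  const results = await Promise.allSettled(
    Array.from({ length: CONCURRENT_USERS }, () => timedRequest())
  );
  const succeeded = results.filter((r) => r.status === 'fulfilled');
  const times = succeeded.map((r) => r.value.ms).sort((a, b) => a - b);
  console.log(`Succeeded: ${succeeded.length}/${CONCURRENT_USERS}`);
  if (times.length > 0) {
    console.log(`Median: ${times[Math.floor(times.length / 2)]} ms, worst: ${times[times.length - 1]} ms`);
  }
})();
```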
When dependencies enter the picture, the need for synthetic monitoring becomes even more pronounced. So many variables are at play that can skew the final test result. If you want reliable results, a controlled environment built exclusively for testing that specific feature is a must; it also reduces the risk of unintentionally breaking other features.
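One common way to get that controlled environment is to stand in for an external dependency with a local stub that returns canned responses, so test runs stay repeatable regardless of what the real service is doing. Below is a minimal sketch using Node's built-in http module; the port and payload are illustrative assumptions:

```javascript
// Sketch: a local stub server that impersonates an external dependency,
// giving every test run the same predictable response.
const http = require('http');

const stub = http.createServer((req, res) => {
  // Every request gets the same canned payload, so the feature under
  // test is isolated from the real dependency's behavior.
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ status: 'ok', inventory: 42 }));
});

stub.listen(8080, () => {
  console.log('Dependency stub listening on http://localhost:8080');
  // Point the application under test at this URL instead of the real service.
});
```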
In short, you need synthetic monitoring for two primary reasons:
1. To proactively build performance benchmarks when real user data is not yet available, such as for features still under development or newly released ones with limited traffic.
2. To observe system behavior in extreme or outlier scenarios (traffic spikes, server outages) within a controlled environment, where real user data rarely exists.
Synthetic testing also brings a range of other benefits, not because of its synthetic nature, but because of its automated nature: tests are consistent and repeatable, can run around the clock on a schedule, and scale far beyond what manual checks allow.
Have a look at the table below to better understand the differences between synthetic monitoring and real user monitoring:
| Aspect | Synthetic Monitoring | Real User Monitoring (RUM) |
| --- | --- | --- |
| Data Source | Simulated transactions executed by monitoring tools | Data collected from actual user interactions with the application |
| Proactive or Reactive | Proactive: identifies issues before they impact real users | Reactive: provides insights into actual user experiences |
| Controlled Environment | Testing is performed in a controlled and predefined space to ensure higher test reliability | Reflects the diversity of real user interactions, but results can be affected by external dependencies |
| Consistency | Consistent and repeatable test scenarios | Data can vary based on user devices, locations, and network conditions |
| Performance Benchmarking | Establishes performance benchmarks for the application | May not offer explicit performance benchmarks |
| Custom Scenarios | Test scenarios can be customized to cover specific use cases | Scenarios are generated based on user behavior |
| Maintenance and Updates | Requires periodic script updates to adapt to application changes | Continuously captures changes in user behavior and application performance |
| Simulation Scope | Can simulate a wide range of user journeys and interactions | Represents actual user actions but may not cover all possible interactions |
| Testing Scale | Scalable for load testing and simulating extreme conditions | Reflects the application's usage patterns, including both low and peak traffic |
| Seasonal Variations | Can simulate load during specific periods but requires adjustment | Reflects seasonal and event-driven variations in user behavior |
| Monitoring Complexity | Relatively simpler setup and less complex data collection | More complex data collection and analysis, given user diversity |
| User Segment Analysis | Limited ability to differentiate user segments | Provides insights into different user segments and behaviors |
| Cost and Resource Usage | Typically more predictable and controllable costs | May incur variable costs based on user volume and data collection |
Have a look at some examples of synthetic monitoring below. You should notice a pattern: all of them have a fairly specific set of requirements that would be quite challenging to capture if you go with real user monitoring alone:
- Uptime checks that ping a site from several geographic regions at fixed intervals
- Scripted login flows that verify authentication works end to end
- Multi-step checkout journeys exercised ahead of a major sales event
- Scheduled health checks against API endpoints that real users never call directly
Before you start, make sure you choose the right approach: you can either write your own test scripts with an open-source framework such as Selenium, or adopt a dedicated automation testing tool.
Each approach comes with its own advantages that you should take into consideration. Writing your own test scripts grants a great degree of customization, but the downside is that you have to update all of them whenever the application changes. Automation testing tools, on the other hand, often come with robust features that support you throughout the entire testing life cycle, but they require some initial investment to get access to the tool.
Scenario: In this example, we'll create a Selenium test script in JavaScript to automate the process of checking a website's homepage for its availability and response time.
Dependencies: install the `selenium-webdriver` package by running the following command in your terminal:

```
npm install selenium-webdriver
```

For ChromeDriver, also run:

```
npm install chromedriver
```
Test steps:
1. Launch a Chrome browser and navigate to the website's homepage.
2. Wait for a known element on the page to appear, confirming that the page has rendered.
3. Measure how long the page took to load and verify that it loaded successfully.
4. Log the response time and close the browser.
Test script:
The following script performs the test steps we mentioned above. Make sure to change the URL from https://www.example.com to the website you want to test.
```javascript
const { Builder, By, until } = require('selenium-webdriver');
const assert = require('assert');

(async function homepageCheck() {
  // Set up the Selenium WebDriver
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    // Start timing before navigation so the measurement covers the full page load
    const startTime = Date.now();

    // Navigate to the website's homepage
    await driver.get('https://www.example.com'); // Replace with the URL you want to monitor

    // Wait for the page to load (e.g., by checking for an element on the page)
    await driver.wait(until.elementLocated(By.id('someElement')), 10000); // Adjust the element and timeout as needed

    const responseTime = Date.now() - startTime;

    // WebDriver does not expose HTTP status codes directly, so verify the page
    // loaded successfully by asserting it rendered a non-empty title
    const title = await driver.getTitle();
    assert.ok(title.length > 0, 'Expected the page to have a non-empty title');

    // Log the response time
    console.log(`Response time: ${responseTime} ms`);
  } catch (error) {
    console.error(`Test failed: ${error.message}`);
  } finally {
    // Close the browser
    await driver.quit();
  }
})();
```
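Save the script (for example as homepage-check.js, a name chosen here for illustration) and run it with `node homepage-check.js`; it prints the measured response time on success, or the failure reason otherwise.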
That was just a basic example of how to use Selenium WebDriver to measure the response time of a web page and check if it loads successfully. Make sure that you understand the pros and cons of synthetic monitoring with Selenium:
| Pros | Cons |
| --- | --- |
| 1. Selenium supports multiple programming languages, giving you the flexibility to write tests in a language you're comfortable with.<br>2. Features and integrations can be tailored to fit your team’s technical requirements, which is important when you want to build a fully customized tech stack. | 1. Minor changes to the website’s architecture can break the tests, so Selenium tests require heavy maintenance.<br>2. There is no existing framework dedicated to synthetic monitoring, so you have to build frameworks from scratch to support your testing activities on web, desktop, mobile, or APIs.<br>3. Setup is needed for web servers, databases, and test environment integrations (e.g., local, CI, cloud environments). |
For automation testing tools, the process of synthetic monitoring is much more straightforward. You don’t have to script much thanks to low-code test authoring features (such as Record-and-Playback, which essentially records your on-screen activities and turns that sequence into a code snippet that you can freely execute on environments of your choice).
Automation testing platforms take this even further by centralizing all stages of the testing life cycle: you can plan your synthetic monitoring, write scripts based on that plan, schedule them to run locally or in the cloud for web, desktop, mobile, and even API, and then generate detailed reports that help you understand your testing effectiveness, all in one place.
Here’s how you can do synthetic monitoring with Katalon, a comprehensive software testing platform for web, desktop, API, and mobile:
Step 1: Sign up and download Katalon Studio. This will primarily be where you create your test cases.
Download Katalon & Start Testing Within Minutes
Step 2: Once you have downloaded Katalon Studio, launch it. From the Katalon Studio interface, click the “Create new Test Case” button.
Let’s call this test case “synthetic_monitoring”. After you name it and write some description, click OK.
You are now ready to create your test case. Katalon offers a wide variety of keywords (essentially automation code snippets for specific actions) that you can use to build a complete test case.
Katalon ships with hundreds of keywords, more than enough for all of your synthetic monitoring needs. You can even create your own custom keywords, or leverage the Record-and-Playback feature to record your on-screen actions and turn that sequence into a full script that can be executed across environments.
But it's not just about quickly creating test cases for web (as well as desktop, mobile, and API) without having to write any code; it's also about scheduling, executing, and viewing reports for them in one place. You can schedule a specific test case or test suite to run in a chosen environment, at a chosen time, and even at set intervals.
After that, you can check the results in Katalon Analytics, which shows detailed information on passed and failed tests before you drill down into the specifics.
Read More: Enabling Synthetic Monitoring with Katalon
The synthetic monitoring process involves creating automated test scripts that simulate user interactions with a web application or service. These scripts are executed at regular intervals from predefined locations or nodes, measuring performance metrics and identifying issues like latency, errors, and downtime.
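As a concrete illustration of "executed at regular intervals," here is a minimal sketch using only built-in Node.js timers; the URL and interval are placeholders, and a real setup would typically use a scheduler or monitoring platform running from multiple geographic locations:

```javascript
// Sketch: run a synthetic check on a fixed schedule with built-in timers.
const TARGET_URL = 'https://www.example.com'; // placeholder endpoint
const INTERVAL_MS = 5 * 60 * 1000; // check every five minutes (illustrative)

async function runCheck() {
  const start = Date.now();
  try {
    const response = await fetch(TARGET_URL);
    const latency = Date.now() - start;
    console.log(`${new Date().toISOString()} status=${response.status} latency=${latency}ms`);
  } catch (error) {
    // Network-level failure: the endpoint is unreachable
    console.error(`${new Date().toISOString()} DOWN: ${error.message}`);
  }
}

runCheck(); // run once immediately on startup
setInterval(runCheck, INTERVAL_MS); // then repeat at the fixed interval
```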
Synthetic monitoring is also known as "active monitoring" or “proactive monitoring.”
Synthetic monitoring is a type of active monitoring. Active monitoring broadly refers to any monitoring method that involves actively generating traffic or probes to assess the performance and availability of a system. Synthetic monitoring specifically involves creating and running scripted tests to mimic user interactions.