To meet the ever-rising demand for quality, businesses must continuously update their apps. After such updates, the app must undergo smoke testing and sanity testing to validate its functionality. Although the definitions of sanity testing and smoke testing sound similar, the two terms should not be used interchangeably.
A good distinction between sanity testing and smoke testing lies in the depth of their testing objectives. Sanity testing makes sure that the core functions of the app work correctly after a code change, while smoke testing verifies whether the app works at all at its most basic level. Smoke testing therefore has a narrower scope than sanity testing.
| Smoke testing | Sanity testing |
| --- | --- |
| Executed on initial/unstable builds | Performed on stable builds |
| Verifies the very basic features | Verifies that bugs have been fixed in the received build and no further issues are introduced |
| Verifies whether the software works at all | Verifies several specific modules, or the modules impacted by a code change |
| Can be carried out by both testers and developers | Carried out by testers |
| A subset of acceptance testing | A subset of regression testing |
| Done when there is a new build | Done after several changes have been made to the previous build |
In this article, we will deep-dive into the comparison between sanity testing and smoke testing, their benefits and challenges, and the processes to perform each.
To understand this example, we first need to understand the concept of a software build.
In software development, a build is a set of executable code ready either for testing or deployment. Depending on its sophistication, a software product can contain thousands of source code files. When the project reaches a certain stage of development, these files are compiled into a standalone application that can be tested and shipped.
If there are problems with the software build, teams return to the source code to examine and modify it. Not every build makes it to public release. Improperly installed environments, flaky networks, faulty build scripts, and multiple other factors can render a build unqualified for deployment, or even a complete failure. Thus, a software build needs to go through rigorous testing to make sure it is fully functional.
Smoke testing, also known as build verification testing or build acceptance testing, is a non-exhaustive testing type that ensures the proper function of basic and critical components of a single build. For example, smoke tests on Gmail only include testing the composing and sending email features - the core ones.
A more illustrative example comes from the name of this method: smoke. The term “smoke testing” actually predates electronics. It is believed to have originated in the plumbing industry, where plumbers would blow smoke into water pipes to identify cracks before fixing them. The term carried over into software testing with a similar meaning: smoke testing is done to make sure that the very basic functionality of the product works at all. If those basic features do not work, there is no point in continuing testing or building.
Performed in the initial phase of the software development life cycle (SDLC), smoke testing aims to detect abnormalities in the application before teams move on to more exhaustive testing. This leaves little room for major errors to ripple through and become harder to fix down the line.
Smoke testing commonly takes place after code reviews. If the initial build passes the smoke tests, it qualifies for further functional testing; if it fails, the build is rejected and handed back to the development team.
The standard process is illustrated below.
To achieve the best results when executing smoke testing, we should follow these common practices:
Suppose you are a software tester working on an e-commerce website that has recently undergone some changes to its payment processing system. After the changes have been made, you need to ensure that the website is still working as expected.
In this case, you could perform smoke testing by quickly navigating to the site. If you find that the website has crashed entirely, there is no point in continuing testing. Instead, you alert the developers of this critical fault so they can troubleshoot immediately before you carry on.
Now let’s suppose that you have a desktop application that calculates tax, billing, and salary for enterprises. You import your data and realize that the app somehow calculates 1 + 1 = 3, an unacceptable result for an app with calculator functionality. In this case, there is no point in testing whether the app can accurately calculate the income tax of 300 employees in the company, since it cannot even perform the most basic arithmetic.
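The calculator scenario above can be expressed as a minimal smoke test. The sketch below is purely illustrative: `add` and `income_tax` are hypothetical stand-ins for the app's real features, and the 25% tax rate is an assumption for the example.

```python
# Minimal smoke-test sketch for a hypothetical calculator app.
# `add` and `income_tax` are stand-ins for the app's real functions.

def add(a, b):
    """Stand-in for the app's most basic arithmetic."""
    return a + b

def income_tax(salary, rate=0.25):
    """Stand-in for a more complex feature built on the basics.
    The 25% flat rate is an assumption for this sketch."""
    return salary * rate

def smoke_test():
    """Check the bare minimum. A failure here means the build
    should be rejected before any deeper testing happens."""
    assert add(1, 1) == 2, "Basic arithmetic is broken - reject the build"

smoke_test()
# Only once the smoke test passes is it worth validating complex
# features such as tax calculation across hundreds of employees.
assert income_tax(50_000) == 12_500.0
```

The key design point is the ordering: the cheap, critical check runs first, and everything else is conditional on it passing.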
Smoke testing is simple, but it provides QA professionals with a valuable confirmation that the Application Under Test is stable enough to proceed with more detailed testing.
If the team only does manual testing, it will be time-consuming, much like manual regression testing. It is therefore recommended to use an automation tool to increase testing capacity and efficiency. A quality management platform like Katalon enables teams to test more scenarios in a short time.
Sanity testing, or surface-level testing, is a software testing technique that ascertains that no issues arise from new functionalities, code changes, or bug fixes. Instead of examining the entire system, it focuses on the narrower areas of added functionality. It is performed on a stable build of the software.
To understand the concept of sanity testing, we can analyze its name: sanity. The term comes from the notion of a “sanity check”. Similar to smoke testing, the goal is to verify that the build is stable enough for further testing. If there are obvious app-breaking bugs, there is no point in continuing; it is much faster to send the build back to the development team immediately than to run time-consuming tests on a build that is guaranteed to fail.
The major difference between sanity testing and smoke testing is that sanity testing is somewhat more involved. Take the plumber example above: if smoke testing is blowing smoke into the pipe to identify cracks (i.e. verifying that the pipe works at all), then sanity testing is checking that the valve is tight, then turning on the water to see whether the pipe leaks after the cracks have been fixed (i.e. verifying that the critical features work correctly and that the bugs have been resolved).
It is rare for a sanity test to fail. When one does, however, testers save themselves from unnecessary in-depth testing of bug-ridden software. In a way, sanity testing is essentially “testing the waters” before diving in.
Instead of examining the entire system, sanity testing focuses on narrower areas of the added functionalities, while regression testing targets the application as a whole.
Sanity testing is executed before the production deployment and after the build is delivered with minor code changes, to fulfill the below objectives:
Now let’s suppose that we have a development project for a mobile banking application, which includes these features:
After the first build of the software, only the most basic features have been developed. In this case, those features should be Login, as no further actions can be taken if the user cannot access the main interface, and Withdraw money - the basic functionality of an ATM application.
In the second build, a new feature is added: Changing PIN. To ensure that this new feature does not interfere with existing functionality, testers conduct sanity testing on the Login and Withdraw money features. If those two features still work, they know that the Changing PIN feature did not affect them negatively.
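The banking scenario can be sketched as a sanity check in code. This is an assumption-laden illustration: the `Account` class and its method names are hypothetical stand-ins for the real application, not any actual banking API.

```python
# Hypothetical sketch: after "Change PIN" ships in build 2, sanity tests
# re-check the existing Login and Withdraw features.

class Account:
    def __init__(self, pin, balance):
        self.pin = pin
        self.balance = balance

    def login(self, pin):                    # existing feature (build 1)
        return pin == self.pin

    def withdraw(self, amount):              # existing feature (build 1)
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

    def change_pin(self, old_pin, new_pin):  # new feature (build 2)
        if not self.login(old_pin):
            raise ValueError("wrong PIN")
        self.pin = new_pin

def sanity_test():
    acct = Account(pin="1234", balance=100)
    acct.change_pin("1234", "9999")          # exercise the new feature
    # The existing features must still behave after the change:
    assert acct.login("9999")
    assert acct.withdraw(40) == 60

sanity_test()
```

Note that the sanity test touches the new feature only enough to trigger its side effects, then concentrates on confirming the pre-existing behavior.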
Another example is when testers discover a bug and send the build back to the development team for troubleshooting. After the fix, testers run sanity tests to confirm that the bug has indeed been resolved. This is more targeted than smoke testing.
Regression testing is a software testing practice rather than a single test type in itself. It contains multiple types of tests, with sanity tests serving as a checkpoint in the process to decide whether the build can proceed to the next level of testing.
Basically, sanity testing works like regression testing but deals with a smaller test suite. This subset contains critical test cases that are run first, before the whole package of regression tests is examined.
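One way to realize "a critical subset run first" in practice (an illustrative approach, not the only one) is to tag cases in the regression suite so the sanity subset can be selected on its own. The test names and tagging scheme below are hypothetical:

```python
# Sketch: the sanity suite as a tagged subset of the full regression suite.
# Test functions and tags here are hypothetical placeholders.

def test_login():          return True   # critical path
def test_checkout():       return True   # critical path
def test_report_export():  return True   # nice-to-have, regression-only

REGRESSION_SUITE = {
    test_login:         {"sanity"},      # runs in the sanity pass
    test_checkout:      {"sanity"},
    test_report_export: set(),           # full regression run only
}

def run(tag=None):
    """Run the whole suite, or only the cases carrying `tag`."""
    selected = [t for t, tags in REGRESSION_SUITE.items()
                if tag is None or tag in tags]
    return all(t() for t in selected)

# Sanity pass first: a small, critical subset of the regression suite.
assert run(tag="sanity")
# Only then is the full regression run worth the time:
assert run()
```

Test frameworks typically offer the same idea natively, for example pytest's custom markers selected with `pytest -m <marker>`.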
The relationship between sanity testing, smoke testing, and regression testing can be seen in the graph below.
Yes, regression testing is performed after executing sanity tests of any modified or added functionality. It ensures an application still works as expected after any code changes, updates, or improvements. It determines whether or not the software is eligible for further functional validation.
Smoke testing is executed first, in the early stages of the SDLC, to establish a foundation of bug-free, reliable core functionality. Once it passes smoke testing, the build moves on to sanity testing to confirm its stability and smooth integration with existing features.
It is best practice for the testing team to start with smoke testing, followed by sanity and then regression tests, based on the project’s timeline and requirements.
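The smoke → sanity → regression ordering can be sketched as a simple gated pipeline. The stage contents below are hypothetical placeholders; in a real project each list would hold the actual checks for that stage.

```python
# Illustrative pipeline: smoke -> sanity -> regression, stopping at the
# first failed stage. Stage checks are hypothetical placeholders.

def run_stage(name, checks):
    """Run every check in a stage; the stage passes only if all do."""
    passed = all(check() for check in checks)
    print(f"{name}: {'PASS' if passed else 'FAIL - build rejected'}")
    return passed

stages = [
    ("smoke",      [lambda: True]),   # does the app work at all?
    ("sanity",     [lambda: True]),   # do the changed modules behave?
    ("regression", [lambda: True]),   # does everything still work?
]

for name, checks in stages:
    if not run_stage(name, checks):
        break   # reject the build; running later stages is pointless
```

The early exit is the point: each stage is cheaper and coarser than the next, so a failure upstream saves the cost of everything downstream.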
Katalon Studio is a comprehensive end-to-end automation testing solution that can change the way you do regression testing. It is an all-in-one regression testing tool for your website, web services, desktop application, mobile application, and even API testing. With Katalon, you can improve test coverage across all aspects of your application.
With Katalon's Record-and-Playback functionality, any team member can easily capture test objects and record actions to simulate real user activity. The recorded sequence can be re-executed during regression testing, saving significant time compared to manual testing.
Additionally, it supports running scripts on a variety of devices, browsers, and testing environments, streamlining testing activities by consolidating them in one location, and eliminating the need for environment configuration and constant tool-switching.
Following test execution, teams can review the results with the aid of comprehensive and customizable test reports in LOG, HTML, CSV, and PDF formats, and forward them as email attachments. Testers can perform root cause analysis for specific failed cases in debugging mode.
Katalon integrates seamlessly into your CI/CD pipeline with the help of its diverse integration ecosystem. Katalon provides nearly all of the features necessary to start testing and bring value to your team at no cost.