When testing debuted in the software development world, it was quickly relegated to a position at the end of the long and expensive waterfall cycle.
This afterthought status belied the importance of testing in the software development life cycle, and it was risky: if things went awry in the testing stage, everything could fall apart, resulting in delivery delays and costly, labor-intensive repairs.
Fortunately, the way developers approach testing has evolved over time to align with new tools and technologies. Most recently, testing has adapted to continuous integration and delivery (CI/CD) pipelines. Now, instead of a risky one-time testing stage, continuous and automated testing enhances troubleshooting ability and limits user impacts.
Read on to learn about the top models of the past and what future models could look like based on changing areas of focus and new tools and processes.
When the waterfall model ruled software development life cycles (SDLCs), testing could only be done after the final development stage was completed. As a result, the testing phase was often rushed. By then, the errors it uncovered could be costly to repair.
One of the early testing models developed to address these issues was the V-model — named for the V-shaped diagram outlining its development and testing steps. It is also sometimes referred to as the validation or verification model.
The V-model introduced a proactive element into the life cycle. It engaged testers in every delivery process step by pairing each stage with a corresponding testing design activity.
At the top of the V, the development team begins gathering requirements, while the testing team plans for and designs acceptance testing. During the system design phase, the testers create a system testing plan — and so on, following each stage of software development until coding is complete.
Then, the previously designed tests can be executed in reverse order, beginning with unit testing and working back up the right side of the V, all the way to acceptance testing.
The V-model paired well with the waterfall methodology, and although it earned a reputation for being expensive and slow, it hasn’t entirely disappeared.
Some industries require an extremely high degree of rigor in testing, or they develop products that must be tested in sophisticated live lab environments. These conditions still require months-long development cycles to release products that meet quality standards.
It can be easy to think of the V-model as the source of delays in waterfall-era development life cycles. However, the pace of development and testing was often dictated by the processes and technology available at the time.
As technology and development cycles sped up, the waterfall methodology gave way to more iterative processes and, eventually, to the agile methods widely used today.
Just as the V-model evolved to adapt to the needs of waterfall development, a new testing model arose as an answer to accelerated development timelines. The philosophy of the testing model shifted as well. Instead of redefining the role of testing in the SDLC, the new model — the test pyramid — served as a strategic metaphor to outline the volume, type and order of testing that would best optimize for speed, effort and cost.
The term “test pyramid” was coined by Mike Cohn in his 2009 book Succeeding with Agile; the pyramid visually represents a three-part testing strategy.
Unit testing serves as the widest foundation layer, and services or integration testing makes up the middle layer, leaving UI or end-to-end testing for the top layer.
Most testing is done at the unit level, where both developers and testers can break down larger functions into smaller pieces to validate and test as they build. In the middle stage, testers validate how components work together, exercising the APIs and services that enable end-user functionality. Both of these stages are ideal for automated testing.
At the point of UI testing, the third and final stage, all components have already been individually tested. As a result, less end-to-end testing is required, and testers can focus on the broader user experience rather than feature function and other more granular elements.
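The pyramid's two automated layers can be sketched in code. This is a minimal illustration only: the `Cart` class and `apply_discount` function are hypothetical examples invented here, not part of any real test suite.

```python
# Hypothetical example code illustrating the test pyramid's lower layers.

class Cart:
    """A tiny shopping cart, used only to give the tests something to test."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def apply_discount(total, percent):
    """A pure function: the easiest target for the wide unit-test base."""
    return round(total * (1 - percent / 100), 2)


# Base layer: many small, fast unit tests that validate one piece in isolation.
def test_discount_math():
    assert apply_discount(100.0, 25) == 75.0


# Middle layer: fewer tests that validate components working together.
def test_cart_total_with_discount():
    cart = Cart()
    cart.add("book", 40.0)
    cart.add("pen", 10.0)
    assert apply_discount(cart.total(), 10) == 45.0


test_discount_math()
test_cart_total_with_discount()
```

In a real project these functions would be discovered and run by a test runner such as pytest; the pyramid's point is simply that tests like `test_discount_math` should vastly outnumber tests at the layers above.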
Testing models continue to evolve as the nature of applications changes.
In 2018, the engineering team at Spotify outlined their own model that they felt better captured the testing needs of a microservice-based architecture.
The honeycomb model draws inspiration from the original pyramid design but makes key adjustments: expanding the center section for integration testing and shrinking both the unit (or implementation detail) and UI (or integrated) testing sections.
This new shape reflects a system architecture that centers on APIs and has fewer, smaller individual units to test. Spotify’s model has gained traction as more organizations move toward cloud infrastructure similarly built on APIs and service integrations. These shifts have increased both the volume and the importance of integration-focused validation relative to the other two testing areas.
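A honeycomb-style integration test exercises a service through its public API while replacing its downstream collaborators with in-memory fakes. The sketch below is hypothetical; `PricingService` and `FakeCatalog` are names invented here for illustration, not part of Spotify's model or any real codebase.

```python
# Hypothetical sketch of an integration test at a microservice boundary.

class FakeCatalog:
    """Stands in for a downstream catalog microservice during the test."""
    def lookup(self, sku):
        return {"sku": sku, "price": 9.99}


class PricingService:
    """The service under test; it talks only to its collaborator's API."""
    def __init__(self, catalog):
        self.catalog = catalog

    def quote(self, sku, quantity):
        item = self.catalog.lookup(sku)
        return round(item["price"] * quantity, 2)


# The test drives the service through its public interface (quote), not its
# internals, so the implementation details beneath can change freely.
def test_quote_through_public_api():
    service = PricingService(FakeCatalog())
    assert service.quote("ABC-1", 3) == 29.97


test_quote_through_public_api()
```

Because the test pins down the API contract rather than implementation details, it stays stable as units are refactored, which is exactly why the honeycomb widens this middle layer.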
The rise of microservices and other architecture-focused discussions is making waves in the testing model conversation. But advancements like artificial intelligence (AI) are also likely to affect testing models moving forward.
Future testing models are likely to include a combination of manual testing, traditionally automated testing and testing activities driven by AI. Any AI elements will also require varying degrees of human supervision and instruction, which should be included in future models.
While testing models themselves may morph over time in various ways to reflect the evolving needs of the industry, it’s important to remember that, at its foundation, any testing model is a visual aid that illustrates a testing philosophy. No model can dictate how testing actually happens.
Visual models may be most useful for generating consensus among development teams on an approach to software testing. Once leadership has agreed on a testing plan, they are better able to hire the right team and equip them with the tools needed to accomplish the goal.
Companies are already pushing the envelope on continuous delivery, moving toward a model of detecting and fixing errors in production — and not because those errors were missed in earlier testing stages. Rather, the tech industry continues to enhance methods for managing issues within the product life cycle.
No matter what other changes are on the horizon, you can be certain the evolution of testing will continue to mirror the evolution of software development and the ever-rising standard of quality.
Learn how Katalon can help you approach and accelerate DevOps testing with a demo of our TestOps platform.