
The Regression Testing Solution for DevOps

Software testing is an essential part of any software development process to make sure everything works as expected. This concept is nothing new. However, what is the protocol for testing an application that is already in production?

New features are requested, bugs are reported — your team gets to work putting together new code and finding fixes. The next question is, “How thoroughly do we need to test this new code?” Sure, if we want to be really cautious we can test the entire application from the ground up, but isn’t that level of extreme caution time-consuming, expensive and unnecessary? The answer might surprise you.

What is Regression Testing?

We’ll come back to the answer in a moment. First, let’s look at how regression testing is defined.

Regression testing is a type of software testing which verifies that software which was previously developed and tested still performs the same way after it was changed or interfaced with other software. Changes may include software enhancements, patches, configuration changes, etc. Source: Wikipedia

While this definition is a start, it still doesn’t tell us everything we need to know. Regression testing is typically split up into two types of tests. They are:

  1. Tests that validate the functional use, and/or accurate processing of data, and
  2. Tests that track performance figures.

Let’s look at an example. Imagine that our product is an online calculator. It has been up and running for several months now, and the calculations are 100% accurate. However, we have recently made some performance enhancements to the code: the calculated results should now display on the website twice as fast as before.

This means we will have two sets of tests to run. Of course, we would need to test the calculator’s performance to make sure the new code really is making results display twice as fast. But even more importantly, we need to test the accuracy of the calculator, to make sure that the new code has not caused unexpected calculation errors.  After all, to a customer, data accuracy is always more important than performance.

In this example, the testing of the performance is not considered regression testing as it is a new feature. Yes, it needs to be tested — but because it is being tested for the first time, that testing process is not considered regression testing. However, since the calculations should yield the same answers as before the code change, any testing performed that validates calculation accuracy is considered regression testing.
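The split described above can be made concrete with a short sketch in Python. The calculator function, the baseline values, and the performance budget below are all hypothetical stand-ins; a real suite would drive the deployed application. The accuracy checks are the regression tests, while the timing check validates the new performance feature.

```python
import time

def calculate(expression):
    # Hypothetical stand-in for the calculator under test; a real suite
    # would call the deployed service rather than evaluate locally.
    return eval(expression)

# Regression tests: results must match the pre-change baseline exactly.
BASELINE = {"2+2": 4, "10*3": 30, "7-5": 2}

def test_accuracy_regression():
    for expression, expected in BASELINE.items():
        assert calculate(expression) == expected, expression

# New-feature test (not regression): the enhanced code should respond
# within the new performance budget (the threshold here is assumed).
def test_performance():
    start = time.perf_counter()
    calculate("2+2")
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5  # deliberately generous budget for illustration

test_accuracy_regression()
test_performance()
```

If the accuracy checks fail after a performance-only change, the new code has regressed behavior the customer depends on.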

Why is Regression Testing Important?

Anytime a developer makes changes or enhancements to code, they are careful not to accidentally break or disrupt other functions of the application. However, application code is complex, and enterprise-level application code can increase in complexity exponentially. Hundreds, if not thousands, of interdependencies can be in play. Even with the utmost care, problems still occur. It’s simply part of coding reality.

Knowing this, development teams perform regression testing to check and re-check the various components of the application following the release of any new code. This confirms (in theory) that their core functionality, performance and interdependencies still work correctly. They do this because the headlines are littered with big and small companies who have suffered major setbacks due to software glitches. Many, if not all, of these issues could have been avoided with better testing. Comprehensive regression testing of an application with every release, before it goes live, is essential to avoid becoming the next headline.

Where does Regression Testing fit in the Development Lifecycle?

Whether you’re running an Agile or DevOps development environment, regression testing is always performed following integration testing and before user testing and deployment to production. While that part is pretty cut and dried, the bigger question is who is responsible for doing it. While it has traditionally been done by a dedicated tester, depending on your organization, it could be performed by a developer or even an automated test tool. There are even automated test tools that use artificial intelligence to perform regression testing. Typically, however, regression testing is performed by a test engineer who creates and manages the test cases, ensuring that all modules are effectively tested and working according to user specifications.
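That ordering can be pinned down in code. The sketch below uses illustrative stage names to show where regression testing sits in a release pipeline; real pipelines will name and group their stages differently.

```python
# Illustrative release-pipeline stage order: regression testing runs
# after integration testing and before user acceptance testing.
PIPELINE = [
    "unit_testing",
    "integration_testing",
    "regression_testing",
    "user_acceptance_testing",
    "production_deployment",
]

def runs_before(pipeline, first, second):
    """Return True if `first` is scheduled earlier than `second`."""
    return pipeline.index(first) < pipeline.index(second)

assert runs_before(PIPELINE, "integration_testing", "regression_testing")
assert runs_before(PIPELINE, "regression_testing", "user_acceptance_testing")
```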

The key thing to remember is that regression testing is a key part of the release process — every bit as important as the development itself. It ensures that the application is working correctly, and that customers and business teams don’t experience problems that can get your organization in the news for the wrong reasons.

[Diagram: Regression Testing - Stages of Development and Testing]

Effective Regression Testing in a Continuous Delivery DevOps Environment

The stages of testing (shown in the above diagram) are identical regardless of whether your organization uses Agile or DevOps. The key difference between them arguably comes down to minor changes in the development lifecycle, and who is responsible for the various stages of testing.

With the Waterfall methodology, the various testing stages were clear-cut and well defined by different individual and team roles. Developers developed, testers tested and managers managed. However, with the continuous delivery and continuous testing model of Agile, those well-defined lines have become blurred. As companies try to improve development life-cycle efficiency, the responsibilities associated with each stage have shifted left.

[Diagram: Regression Testing - DevOps Cycle]

In practice, this meant that developers were now taking on a significant amount of the testing to expedite issue resolution and code release. Then organizations began implementing DevOps with the goal of improving efficiency even more, which blurred the lines between defined roles even further, and responsibilities shifted still further left in the cycle. This meant that developers would take on more testing, and testers would take on more development. In some cases, even tasks that would traditionally have been the responsibility of operations would now be performed by testers and developers — interacting directly with the customer, identifying issues, and then quickly resolving them.

This is as far as we’ll go in this discussion comparing these two methodologies. However, even with these improved efficiencies and shortened timelines, the stages of testing haven’t changed and are just as important as ever. The only way to develop quality software as quickly and efficiently as possible is to provide efficient, effective regression testing.

Scheduling Environments for Regression Testing

Application testing in a large enterprise needs to be performed in a wide variety of environments that mimic the production environment as closely as possible. There are many ways to do this. One of the more popular methods is called Service Virtualization. In simple terms, this creates a virtual environment that simulates the production environment. This allows developers and testers to test their new code in an environment as close to live production as possible.
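As a toy illustration of the idea (not how a commercial service-virtualization product works), a virtualized dependency can be thought of as a stub that returns canned responses in place of the real production service. The endpoint and payload below are invented for the example.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubPricingService(BaseHTTPRequestHandler):
    """Stands in for a real downstream pricing service during testing."""
    CANNED = {"/price/widget": {"price": 9.99, "currency": "USD"}}

    def do_GET(self):
        body = self.CANNED.get(self.path)
        if body is None:
            self.send_error(404)
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

# Bind to an ephemeral port and serve in the background, as a test would.
server = HTTPServer(("127.0.0.1", 0), StubPricingService)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/price/widget"
response = json.loads(urlopen(url).read())
server.shutdown()
```

Code under test talks to the stub exactly as it would to production, which is what lets new code be exercised safely before release.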

While that is the simplified explanation of Service Virtualization, the reality is that a global enterprise can have thousands of artifacts that make up a single virtual test environment. This means that to effectively mimic the production environment, everything needs to be considered — hardware, firmware, software, code versions, networks, and much more. When you factor in the hardware, software, resources, and licensing needed for every instance, regardless of production or test, it becomes financially unrealistic to expect a dedicated license for every test group. That means that these licenses have to be coordinated, scheduled and reserved for different developers and testers throughout the organization.
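The coordination problem itself is easy to state. A minimal sketch, assuming one reservation record per licensed environment (environment and team names are invented):

```python
from datetime import datetime

def overlaps(a_start, a_end, b_start, b_end):
    """Two reservations conflict if their time windows intersect."""
    return a_start < b_end and b_start < a_end

def reserve(schedule, env, team, start, end):
    """Book `env` for `team` unless the slot is already taken."""
    for booked_env, _team, s, e in schedule:
        if booked_env == env and overlaps(start, end, s, e):
            raise ValueError(f"{env} already reserved for that window")
    schedule.append((env, team, start, end))

schedule = []
d = datetime.fromisoformat
reserve(schedule, "perf-test-env", "team-a", d("2024-01-08T09:00"), d("2024-01-08T12:00"))
reserve(schedule, "perf-test-env", "team-b", d("2024-01-08T12:00"), d("2024-01-08T15:00"))
# A third request overlapping either window would raise ValueError.
```

Multiply this by thousands of artifacts and dozens of teams and it is clear why dedicated tooling exists for the job.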

Without tools like Environments from Plutora, the coordination and scheduling of these environments is no small feat. Multiple development and test teams, thousands of different artifacts, and all of their various interdependencies — without the proper tools this can quickly become completely unmanageable.

Creating a Test Plan for Regression Testing

A quality Agile or DevOps program needs organization and structure. The same goes for the testing effort regardless of who is specifically assigned to run the tests. To make sure the testing is effective and thorough, a test plan needs to be created. A good test plan has multiple functions. They are:

Consistency – When performing regression testing, it’s important that the entire application is tested thoroughly. One aspect of this is to ensure that every part of the application is tested. The other is to make sure that the different parts of the application are tested consistently, using the same tests and test variables and checking for the same expected results. Keeping and following a test plan like this will not only improve the speed with which the tests can be performed, but also ensure the accuracy of the tests, and thus the quality of the application.

Show Testing Coverage – It’s almost impossible to show what parts of an application will be tested, or have been tested, without having a test plan to point to. It’s like a coloring book, showing an outline of what needs to be tested. As each test is performed, one section at a time is colored in. This makes it easy to see the status of the regression testing at any time, and if necessary, where to pick up and continue the tests.

Continuity – A good test plan should be completely transferable from one person to another. It should include every detail on how to exactly recreate the necessary test environment, set up the test scenario, perform the test, define data inputs, and identify what the results should be. This level of documentation will ensure continuity from one test series to another — and from one Test Engineer to another in case of reassignment.

Speed – A good test plan significantly improves the speed and efficiency with which a regression test can be performed. This is because all the necessary tests and their respective details are laid out in a well-thought-out structure that is easy to read, duplicate and execute.

Audit Trail – Enterprise organizations deal with high stakes. At some point, it will be necessary to have an audit trail to show what was done, by whom, and when — whether to respond to a trivial query or to comply with a major investigation. A good test plan will show exactly that information.

Accountability – Each test or test section should have space for the person who is performing the tests to record their name and date. This assigns an owner to each section who states “I have tested this application according to these recorded tests, and approve of it going into production.” When accountability is assigned in this way, that person takes a greater level of pride and responsibility in their work.
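A test plan entry that serves the functions above (consistency, continuity, audit trail, accountability) might carry fields like these. The schema and values are purely illustrative, not a Plutora format:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One entry in a regression test plan."""
    case_id: str
    environment_setup: str   # how to recreate the test environment
    steps: list              # exact steps to perform
    data_inputs: dict        # inputs the test feeds in
    expected_result: str     # what a pass looks like
    # Accountability / audit-trail fields, filled in at execution time:
    executed_by: str = ""
    executed_on: str = ""
    passed: bool = False

case = TestCase(
    case_id="CALC-001",
    environment_setup="Deploy build 4.2 to the staging environment",
    steps=["Open calculator page", "Enter 2+2", "Press ="],
    data_inputs={"expression": "2+2"},
    expected_result="Display shows 4",
)
# Executed and signed off:
case.executed_by = "J. Tester"
case.executed_on = "2024-01-08"
case.passed = True
```

Because every field needed to rerun the case travels with it, the record is transferable between engineers and answers the who/what/when questions an audit asks.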

Maintaining an Effective Test Plan

Just as the code for an enterprise application is constantly changing and improving, effective regression testing must keep pace with those changes. The regression test plans need to be maintained, not only to reflect new changes in the application code, but also to become iteratively more effective, thorough, and efficient. A test plan should be considered a living document.

Regression Testing Tools and Solutions

Automated Test
There are a variety of automated software testing tools to consider, each with its own claims of superiority. So, depending on your organization’s application, interface, network or methodology, there is more than likely an automated test solution waiting for you. For example, if you need to perform regular regression tests on an online interface, Selenium, which specializes in web browser automation, would be a good solution to consider.
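Whatever tool drives the interface, the automated checks themselves boil down to pinning current behavior against recorded expectations. A minimal sketch with Python’s built-in unittest follows; the application function and expected values are hypothetical:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical application code under regression test."""
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    """Each case pins down behavior that must not change between releases."""

    def test_standard_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_rounding_behavior(self):
        # Pinned so a future refactor can't silently change rounding.
        self.assertEqual(apply_discount(19.99, 15), 16.99)

if __name__ == "__main__":
    unittest.main(exit=False)
```

A browser-automation tool like Selenium replaces the direct function call with driving the web UI, but the structure of the regression suite stays the same.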

Test Management
Understanding what will be tested, what won’t be tested, and what has already been tested can be challenging enough within a single test team. Maintaining that visibility across an entire global enterprise spanning dozens of development teams is a challenge on a completely different scale. Whether you are planning or executing manual or automated test plans and scripts, Plutora Test takes care of it. It interfaces directly with thousands of testing tools (like the above-mentioned Selenium) to both simplify and improve the efficiency of the overall execution and management of the testing process.

Environment Management
For regression testing to be accurately performed on new application code, it needs to be tested in an environment that mimics the production environment as closely as possible. For a global enterprise, this means mimicking thousands, if not millions, of customers, and a dizzying network of servers, applications, firmware versions… a daunting task to say the least. These environments then need to be provisioned time and time again to meet the needs of the various development teams. It’s in this type of scenario that Plutora Environments shines. It effectively manages not only the environments but each of the thousands of different artifacts that comprise them. This allows you to quickly, reliably and repeatedly provision test environments that mimic every detail of the production environments, but without the risk.

The post The Regression Testing Solution for DevOps appeared first on Plutora.


