
The Regression Testing Solution for DevOps

Software testing is an essential part of any software development process to make sure everything works as expected. This concept is nothing new. However, what is the protocol for testing an application that is already in production?

New features are requested, bugs are reported — your team gets to work putting together new code and finding fixes. The next question is, “How thoroughly do we need to test this new code?” Sure, if we want to be really cautious we can test the entire application from the ground up, but isn’t that level of extreme caution time-consuming, expensive and unnecessary? The answer might surprise you.

What is Regression Testing?

We’ll come back to the answer in a moment. First, let’s look at how regression testing is defined.

“Regression testing is a type of software testing which verifies that software which was previously developed and tested still performs the same way after it was changed or interfaced with other software. Changes may include software enhancements, patches, configuration changes, etc.” (Source: Wikipedia)

While this definition is a start, it still doesn’t tell us everything we need to know. Regression testing is typically split up into two types of tests. They are:

  1. Tests that validate the functional use, and/or accurate processing of data, and
  2. Tests that track performance figures.

Let’s look at an example. Imagine that our product is an online calculator. It has been up and running for several months now, and the calculations are 100% accurate. However, we have recently made some performance enhancements to the code: the calculated results should now display on the website twice as fast as before.

This means we will have two sets of tests to run. Of course, we would need to test the calculator’s performance to make sure the new code really is making results display twice as fast. But even more importantly, we need to test the accuracy of the calculator, to make sure that the new code has not caused unexpected calculation errors.  After all, to a customer, data accuracy is always more important than performance.

In this example, the testing of the performance is not considered regression testing as it is a new feature. Yes, it needs to be tested — but because it is being tested for the first time, that testing process is not considered regression testing. However, since the calculations should yield the same answers as before the code change, any testing performed that validates calculation accuracy is considered regression testing.
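The distinction can be sketched as two kinds of automated checks. In this hypothetical sketch, `calculate` stands in for the calculator’s back end (the names and thresholds are invented for illustration): the accuracy assertions are regression tests because they re-verify pre-change behavior, while the timing check validates the new performance feature for the first time.

```python
import time
import unittest

def calculate(expression):
    """Stand-in for the calculator back end (hypothetical)."""
    # eval() is acceptable here only because the inputs are fixed test fixtures.
    return eval(expression)

class CalculatorTests(unittest.TestCase):
    def test_accuracy_unchanged(self):
        # Regression tests: results must match the pre-change baseline.
        self.assertEqual(calculate("2 + 2"), 4)
        self.assertEqual(calculate("10 / 4"), 2.5)

    def test_performance_target(self):
        # Not a regression test: this validates the *new* performance goal.
        start = time.perf_counter()
        calculate("3 * 7")
        self.assertLess(time.perf_counter() - start, 0.5)
```

Running the suite with `python -m unittest` would execute both checks, but only the first method is regression testing in the strict sense.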

Why is Regression Testing Important?

Anytime a developer makes changes or enhancements to code, they are careful not to accidentally break or disrupt other functions of the application. However, application code is complex, and enterprise-level application code can increase in complexity exponentially. Hundreds, if not thousands, of interdependencies can be in play. Even with the utmost care, problems still occur. It’s simply part of coding reality.

Knowing this, development teams perform regression testing to check and re-check the various components of the application following the release of any new code. This confirms (in theory) that core functionality, performance and interdependencies still work correctly. They do this because the headlines are littered with companies, big and small, that have suffered major setbacks due to software glitches. Many, if not all, of these issues could have been avoided with better testing. Comprehensive regression testing of an application with every release, before it goes live, is essential to avoid becoming the next headline.

Where does Regression Testing fit in the Development Lifecycle?

Whether you’re running an Agile or DevOps development environment, regression testing is always performed following integration testing and before user testing and deployment to production. While that part is pretty cut and dried, the bigger question is who is responsible for doing it. While it has traditionally been done by a dedicated tester, depending on your organization, it could be performed by a developer or even an automated test tool. There are even automated test tools that use artificial intelligence to perform regression testing. Typically, however, regression testing is performed by a test engineer who creates and manages the test cases, ensuring that all modules are effectively tested and working according to user specifications.

The key thing to remember is that regression testing is a key part of the release process — every bit as important as the development itself. It ensures that the application is working correctly, and that customers and business teams don’t experience problems that can get your organization in the news for the wrong reasons.

[Diagram: Regression Testing - Stages of Development and Testing]

Effective Regression Testing in a Continuous Delivery DevOps Environment

The stages of testing (shown in the above diagram) are identical regardless of whether your organization uses Agile or DevOps. The key difference between them arguably comes down to minor changes in the development lifecycle, and who is responsible for the various stages of testing.

With the Waterfall methodology, the various testing stages were clear-cut and well defined by distinct individual and team roles. Developers developed, testers tested and managers managed. However, with the continuous delivery and continuous testing model of Agile, those well-defined lines have become blurred. As companies try to improve development life-cycle efficiency, the responsibilities associated with each stage have shifted left.

[Diagram: Regression Testing - DevOps Cycle]

In practice, this meant that developers were now taking on a significant amount of the testing to expedite issue resolution and code release. Then organizations began implementing DevOps with the goal of improving efficiency even more, which blurred those role boundaries further and shifted responsibilities still further left in the cycle. Developers took on more testing, and testers took on more development. In some cases, even tasks that would traditionally have been the responsibility of operations were now performed by testers and developers: interacting directly with customers, identifying issues, and then quickly resolving them.

This is as far as we’ll go in this discussion comparing these two methodologies. However, even with these improved efficiencies and shortened timelines, the stages of testing haven’t changed and are just as important as ever. The only way to develop quality software as quickly and efficiently as possible is to provide efficient, effective regression testing.

Scheduling Environments for Regression Testing

Application testing in a large enterprise needs to be performed in a wide variety of environments that mimic the production environment as closely as possible. There are many ways to do this. One of the more popular methods is called Service Virtualization. In simple terms, this creates a virtual environment that simulates the production environment. This allows developers and testers to test their new code in an environment as close to live production as possible.
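One way to picture Service Virtualization is as a stub that answers like a production dependency. The sketch below is a deliberately minimal, hypothetical illustration using only the Python standard library: a stub HTTP service serves canned responses recorded from production, so tests can run without the real backend (the endpoint path and payload are invented for this example).

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses recorded from the production service (hypothetical data).
CANNED = {"/price/42": {"sku": 42, "price": 19.99}}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {}).encode())

    def log_message(self, *args):
        # Keep test output quiet.
        pass

def start_stub(port=0):
    """Start the virtual service on an ephemeral port; returns the server."""
    server = HTTPServer(("127.0.0.1", port), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A test would point the application at `http://127.0.0.1:<port>` instead of the real service. Commercial service-virtualization tools do far more (protocol simulation, latency injection, stateful behavior), but the principle is the same.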

While that is the simplified explanation of Service Virtualization, the reality is that a global enterprise can have thousands of artifacts that make up a single virtual test environment. This means that to effectively mimic the production environment, everything needs to be considered — hardware, firmware, software, code versions, networks, and much more. When you factor in the hardware, software, resources, and licensing required for every instance, whether production or test, it becomes financially unrealistic to expect a dedicated license for every test group. That means that these licenses have to be coordinated, scheduled and reserved for different developers and testers throughout the organization.

Without tools like Environments from Plutora, the coordination and scheduling of these environments is no small feat. Multiple development and test teams, thousands of different artifacts, and all of their various interdependencies — without the proper tools, this can quickly become completely unmanageable.

Creating a Test Plan for Regression Testing

A quality Agile or DevOps program needs organization and structure. The same goes for the testing effort regardless of who is specifically assigned to run the tests. To make sure the testing is effective and thorough, a test plan needs to be created. A good test plan has multiple functions. They are:

Consistency – When performing regression testing, it’s important that the entire application is tested thoroughly. One aspect of this is to ensure that every part of the application is tested. The other is to make sure that the different parts of the application are tested effectively, using the same tests and test variables and producing the same results. Keeping and following a test plan like this will not only improve the speed with which the tests can be performed, but also ensure the accuracy of the tests, and thus the quality of the application.

Show Testing Coverage – It’s almost impossible to show what parts of an application will be tested, or have been tested, without having a test plan to point to. It’s like a coloring book, showing an outline of what needs to be tested. As each test is performed, one section at a time is colored in. This makes it easy to see the status of the regression testing at any time, and if necessary, where to pick up and continue the tests.

Continuity – A good test plan should be completely transferable from one person to another. It should include every detail on how to exactly recreate the necessary test environment, set up the test scenario, perform the test, define data inputs, and identify what the results should be. This level of documentation will ensure continuity from one test series to another — and from one Test Engineer to another in case of reassignment.

Speed – A good test plan significantly improves the speed and efficiency with which a regression test can be performed. This is because all the necessary tests and their respective details are laid out in a well-thought-out structure that is easy to read, duplicate and execute.

Audit Trail – Enterprise organizations deal with high stakes. At some point, it will be necessary to have an audit trail to show what was done, by whom, and when — whether to respond to a trivial query or to comply with a major investigation. A good test plan will show exactly that information.

Accountability – Each test or test section should have space for the person who is performing the tests to record their name and date. This assigns an owner to each section who states “I have tested this application according to these recorded tests, and approve of it going into production.” When accountability is assigned in this way, that person takes a greater level of pride and responsibility in their work.
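The functions above map naturally onto a structured test-case record. As a minimal sketch (the field names are assumptions for illustration, not a standard schema), each entry in the plan captures the module under test, the environment setup, the exact steps and inputs, the expected baseline result, and a sign-off for accountability and the audit trail:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class RegressionTestCase:
    """One entry in a regression test plan (hypothetical schema)."""
    case_id: str
    module: str                        # coverage: which part of the app
    environment: str                   # continuity: how to recreate the setup
    steps: List[str]                   # consistency: exact procedure to follow
    inputs: dict                       # consistency: fixed test data
    expected: str                      # the pre-change baseline result
    executed_by: Optional[str] = None  # accountability / audit trail
    executed_on: Optional[date] = None

    def sign_off(self, tester: str) -> None:
        """Record who ran the case and when, for the audit trail."""
        self.executed_by = tester
        self.executed_on = date.today()
```

A collection of such records gives you the coloring-book view of coverage (which cases have an executor recorded) and transfers cleanly from one test engineer to another.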

Maintaining an Effective Test Plan

Just as the code for an enterprise application is constantly changing and improving, effective regression testing on that application must also keep up with and adapt to the changes of the application. The regression test plans need to be maintained, not only to reflect new changes in the application code, but also to become iteratively more effective, thorough, and efficient. A Test Plan should be considered a living document.

Regression Testing Tools and Solutions

Automated Test
There are a variety of automated software testing tools to consider, each claiming superiority in some respect. Depending on your organization’s application, interface, network or methodology, there is more than likely an automated test solution waiting for you. For example, if you need to regularly perform regression tests on a web interface, Selenium, which specializes in web browser automation, would be a good solution to consider.
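As a rough sketch of what a browser-based regression check might look like with Selenium (the URL and element IDs below are invented for illustration, and running it requires the `selenium` package plus a matching browser driver such as ChromeDriver):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical target: an online calculator page with inputs "a" and "b",
# an "equals" button, and a "result" field.
driver = webdriver.Chrome()
try:
    driver.get("https://calculator.example.com")
    driver.find_element(By.ID, "a").send_keys("2")
    driver.find_element(By.ID, "b").send_keys("2")
    driver.find_element(By.ID, "equals").click()
    # Regression check: the displayed result must match the pre-change baseline.
    assert driver.find_element(By.ID, "result").text == "4"
finally:
    driver.quit()
```

In practice such scripts are wrapped in a test framework and run against every release as part of the regression suite.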

Test Management
Understanding what will be tested, what won’t be tested, and what has already been tested can be challenging enough within a single test team. Maintaining that visibility across a global enterprise that spans dozens of development teams is a challenge on a completely different scale. Whether you are planning or executing manual or automated test plans and scripts, Plutora Test takes care of it. Test interfaces directly with thousands of testing tools (like the above-mentioned Selenium) to both simplify and improve the efficiency of the overall execution and management of the testing process.

Environment Management
For regression testing to be accurately performed on new application code, it needs to be tested in an environment that mimics the production environment as closely as possible. For a global enterprise environment, this means mimicking thousands, if not millions, of customers, and a dizzying network of servers, applications, firmware versions… a daunting task to say the least. These environments then need to be provisioned time and time again to meet the needs of the various development teams. It’s in this type of scenario that Plutora Environments shines. It effectively manages not only the environments but each of the thousands of different artifacts that comprise them. This allows you to quickly, reliably and repeatedly provision test environments that mimic every detail of the production environments, but without the risk.

The post The Regression Testing Solution for DevOps appeared first on Plutora.


More Stories By Plutora Blog

Plutora provides Enterprise Release and Test Environment Management SaaS solutions aligning process, technology, and information to solve release orchestration challenges for the enterprise.

Plutora’s SaaS solution enables organizations to model release management and test environment management activities as a bridge between agile project teams and an enterprise’s ITSM initiatives. Using Plutora, you can orchestrate parallel releases from several independent DevOps groups all while giving your executives as well as change management specialists insight into overall risk.

Supporting the largest releases for the largest organizations throughout North America, EMEA, and Asia Pacific, Plutora provides proof that large companies can adopt DevOps while managing the risks that come with wider adoption of self-service and agile software development in the enterprise. Aligning process, technology, and information to solve increasingly complex release orchestration challenges, this Gartner “Cool Vendor in IT DevOps” upgrades the enterprise release management from spreadsheets, meetings, and email to an integrated dashboard giving release managers insight and control over large software releases.
