
Continuous Testing at the Speed of DevOps

"Even if you've got a Maserati, you need a good driver who knows how to drive it. Speed is important, but safety and accuracy are also key" – Bob Aiello

Parasoft recently hosted a "Continuous Testing in the DevOps World" webinar featuring Bob Aiello, Technical Editor for CM Crossroads, and Wayne Ariola, Parasoft Chief Strategy Officer and co-author of Continuous Testing. Since the webinar generated such overwhelming attendance and response, we thought we'd highlight some of the key points here.
 

What is DevOps?

DevOps is a set of principles and practices that helps you communicate and collaborate more effectively. Why? Because developers know a lot of very important information. So do QA testers… and so does Operations. If your organization is going to be successful, you need to be able to harness all of that knowledge and experience. A key part of DevOps is creating the "automated deployment pipeline" – the ability to deploy changes (bug fixes, new features, infrastructure changes, etc.) as often as needed, with absolute reliability. The ability to get it right each and every time is really key. Companies that master this gain a very important strategic advantage. Those that don't are taking on a huge amount of risk.
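To make the "automated deployment pipeline" idea a little more concrete, here is a minimal sketch in Python of a pipeline driver that runs each stage in a fixed order and stops the moment any stage fails. The stage names and shell commands are hypothetical placeholders, not anything from the webinar; the point is only that every change passes through the same gates, every time.

```python
import subprocess
import sys

# Hypothetical pipeline stages. Every change goes through the same gates,
# in the same order, every time -- that repeatability is what lets a team
# deploy as often as needed with confidence.
STAGES = [
    ("build",            ["./gradlew", "assemble"]),
    ("unit tests",       ["./gradlew", "test"]),
    ("static analysis",  ["./run-static-analysis.sh"]),
    ("deploy to stage",  ["./deploy.sh", "staging"]),
    ("acceptance tests", ["./run-acceptance-tests.sh", "staging"]),
    ("deploy to prod",   ["./deploy.sh", "production"]),
]

def run_pipeline() -> None:
    for name, command in STAGES:
        print(f"--- {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast: a change never reaches production unless
            # every earlier gate passed.
            sys.exit(f"Stage '{name}' failed; stopping the pipeline.")
    print("All stages passed; change deployed.")

if __name__ == "__main__":
    run_pipeline()
```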

What is Continuous Testing?

Per the Continuous Testing Wikipedia page:

"Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. For Continuous Testing, the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals."

Are we done testing? Wrong question!

When assessing the risk of a release candidate, most software teams ask the question: "Are we done testing?" Fundamentally, this is the wrong question.

Especially with DevOps and Continuous Delivery, releasing with both speed and confidence requires having immediate feedback on the business risks associated with a software release candidate. Given the rising cost and impact of software failures, you can't afford to unleash a release that could disrupt the existing user experience or introduce new features that expose the organization to new security, reliability, or compliance risks. To prevent this, the organization needs to extend testing from validating bottom-up requirements to assessing the system requirements associated with overarching business goals.

Instead of "Are we done testing?" software development teams should be asking: "Does the release candidate have an acceptable level of business risk?" If we can answer this question, we can determine whether the application is truly ready to progress through the delivery pipeline at any given time.
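As a rough illustration of that reframing, here is a small Python sketch of a release gate that answers "is the business risk acceptable?" rather than "did every test pass?" The test names, risk weights, and threshold are invented for this example and are not Parasoft's implementation.

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    business_risk: float  # 0.0 = cosmetic, 1.0 = critical to the business

# Hypothetical results from the regression suite, each mapped to a risk weight.
results = [
    TestResult("login end-to-end", passed=True, business_risk=1.0),
    TestResult("checkout end-to-end", passed=False, business_risk=0.9),
    TestResult("report header naming rule", passed=False, business_risk=0.05),
]

ACCEPTABLE_RISK = 0.2  # threshold agreed with the business, per release

def residual_risk(results) -> float:
    """Sum of the business risk carried by failing tests."""
    return sum(r.business_risk for r in results if not r.passed)

def release_candidate_ready(results) -> bool:
    return residual_risk(results) <= ACCEPTABLE_RISK

if __name__ == "__main__":
    print(f"Residual business risk: {residual_risk(results):.2f}")
    print("Ready to promote" if release_candidate_ready(results) else "Hold the release")
```

In this invented example, a pure pass/fail gate would treat both failures the same, while the risk-weighted view shows that only the checkout failure should hold the release.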

What will the next generation of testing look like?

The overarching goal will be to close the gap between business expectations and dev/test activities. First, we'll have to bring operations (runtime) data associated with the actual user experience into our testing environment. We'll also use simulation to enable testing to begin early and continue to run at the necessary frequency. We won't stop testing requirements or performing bottom-up verification of changes. This still needs to occur, but it needs to be integrated into a continuous regression suite that's validated against the user experience. Understanding how new functionality impacts the broader system is key for accurately assessing the total risk of the release candidate.

All tests need to be placed into a regression suite that measures business objectives; then "process intelligence" can be used to better understand the impact of change. This will guide us toward two things. First, better exploratory testing, where corner cases and more investigative testing can be applied to the more variable outcomes. Second, automated acceptance testing, designed to ensure that there's no negative impact on today's user experience.
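One way to picture "validating the regression suite against the real user experience" is to weight each regression test by how often the transaction it protects actually occurs in production. The log format, test-to-transaction mapping, and weights below are assumptions made purely for illustration.

```python
from collections import Counter

# Hypothetical production log: one entry per user transaction observed at runtime.
production_transactions = ["search", "search", "checkout", "search", "login", "checkout"]

# Regression tests mapped to the transaction they protect (assumed mapping).
regression_tests = {
    "test_search_results": "search",
    "test_checkout_total": "checkout",
    "test_login_redirect": "login",
    "test_admin_export":   "admin_export",  # never seen in production above
}

usage = Counter(production_transactions)
total = sum(usage.values())

# Weight each test by the share of real traffic its transaction represents,
# so a failure's impact can be read in terms of the actual user experience.
for test, transaction in regression_tests.items():
    weight = usage.get(transaction, 0) / total
    print(f"{test:22s} covers {transaction:12s} weight={weight:.2f}")
```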
 

What are key factors in moving from automated testing to Continuous Testing?

Automated testing involves automated, CI-driven execution of whatever set of tests the team has accumulated. However, if one of these tests fails, what does that really mean: does it indicate a critical business risk, or just a violation of some naming standard that nobody is really committed to following anyway? And what happens when it fails? Is there a clear workflow for prioritizing defects vs. business risks and addressing the most critical ones first? And for each defect that warrants fixing, is there a process for exposing all similar defects that might already have been introduced, as well as preventing this same problem from recurring in the future?  This is where the difference between automated and continuous becomes evident.

To evolve from automated to continuous, you need the following:

  1. Clearly defined business expectations, with business risks identified per application, team, and release.
  2. Defects automatically prioritized against the business drivers, with a clear path to mitigating those risks before the release candidate goes live.
  3. Testing in complete test environments continuously using simulation—this is critical for protecting the current user experience from the impact of change.
  4. Feedback loop for defect prevention—looking for patterns that emerge and using them as an opportunity to design and implement practices that keep similar defects from being introduced (a small sketch of this follows the list).
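As a sketch of that fourth point, the snippet below groups resolved defects by an assumed "root cause" tag and flags any cause that keeps recurring as a candidate for a prevention practice. The tags, IDs, and threshold are illustrative only, not a prescribed taxonomy.

```python
from collections import Counter

# Hypothetical resolved defects, each tagged with a root cause during triage.
resolved_defects = [
    {"id": "D-101", "root_cause": "unvalidated input"},
    {"id": "D-114", "root_cause": "unvalidated input"},
    {"id": "D-120", "root_cause": "missing timeout"},
    {"id": "D-133", "root_cause": "unvalidated input"},
]

RECURRENCE_THRESHOLD = 3  # how many repeats before we treat it as a pattern

causes = Counter(d["root_cause"] for d in resolved_defects)
for cause, count in causes.items():
    if count >= RECURRENCE_THRESHOLD:
        # A recurring cause is a signal to add a prevention practice --
        # for example, a static analysis rule or a code review checklist item.
        print(f"Pattern detected: '{cause}' occurred {count} times; "
              f"consider a defect prevention practice.")
```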
      

How does Service Virtualization fit into DevOps and Continuous Testing?

After organizations start accelerating their software delivery pipeline, they often reach the point where they need to test but can't exercise the AUT (application under test) because a complete test environment is not yet ready. A lot of teams use simulation technologies such as service virtualization to get around these roadblocks. For example, say your mainframe is only available for 2 hours on Saturday night. You can record your AUT's interactions with it, then capture this as a virtual asset. You can do the same for a database, third-party application, SAP, etc.
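Here is a bare-bones illustration of the record-and-replay idea, assuming the constrained dependency is reachable over simple HTTP during its availability window. The endpoints, file format, and class names are invented for this sketch; real service virtualization tools also handle protocols such as MQ, SAP, and mainframe traffic.

```python
import json
import urllib.request

CAPTURE_FILE = "mainframe_capture.json"

def record(endpoint: str, paths: list[str]) -> None:
    """While the real dependency is available, capture its responses."""
    capture = {}
    for path in paths:
        with urllib.request.urlopen(f"{endpoint}{path}") as resp:
            capture[path] = resp.read().decode()
    with open(CAPTURE_FILE, "w") as f:
        json.dump(capture, f)

class VirtualAsset:
    """Replays captured responses when the real dependency is unavailable."""
    def __init__(self, capture_file: str = CAPTURE_FILE):
        with open(capture_file) as f:
            self.responses = json.load(f)

    def get(self, path: str) -> str:
        return self.responses[path]

# During the Saturday-night window: call record(...) against the real gateway.
# The rest of the week, the AUT's tests talk to VirtualAsset() instead.
```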

Simulation technologies are not perfect. But in order to exercise the "big blocks" and get those risks out of the way, we need to begin exploratory testing... and we can't do this without simulation. To truly protect the end user experience, we need to aggressively test and defend it across key end-to-end transactions. With today's systems, those transactions pass through a large number of different components, so it's very difficult to accommodate that in a single staged test environment—cloud or not. Simulation helps us get around this. For the most realistic simulated environment, we need to really understand how components behave in an operational environment and transfer this to the simulation.

Watch the "Continuous Testing in the DevOps World" Webinar On Demand

Want to learn more? You can watch the complete 60-minute webinar at your convenience.

 

Read the original blog entry...

More Stories By Wayne Ariola

Wayne Ariola is Vice President of Strategy and Corporate Development at Parasoft, a leading provider of integrated software development management, quality lifecycle management, and dev/test environment management solutions. He leverages customer input and fosters partnerships with industry leaders to ensure that Parasoft solutions continuously evolve to support the ever-changing complexities of real-world business processes and systems. Ariola has more than 15 years of strategic consulting experience within the technology and software development industries. He holds a BA from the University of California at Santa Barbara and an MBA from Indiana University.
