By Lori MacVittie
October 25, 2014 02:00 PM EDT
Kirk Byers at SDN Central writes frequently on DevOps as it relates (and applies) to the network, and recently introduced a list of seven applicable DevOps principles in an article entitled "DevOps and the Chaos Monkey." On that list is the notion of reducing variation.
This caught my eye because reducing variation is a key goal of Six Sigma; in fact, its entire formula is based on measuring the impact of variation on results. The idea is that by measuring deviation from a desired outcome, you can immediately recognize whether changes to a process improve the consistency of the outcome. Quality is achieved by reducing variation, or so the methodology goes.
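That measurement can be made concrete with a quick calculation. A minimal sketch in Python, computing defects per million opportunities (DPMO), the standard Six Sigma quality metric; the deployment counts here are invented purely for illustration:

```python
# Sketch: quantify variation in outcomes as defects per million
# opportunities (DPMO), the core Six Sigma quality metric.
# The change-attempt counts below are hypothetical.

def dpmo(defects: int, units: int, opportunities_per_unit: int = 1) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Before standardizing device configs: 40 failed changes in 2,000 attempts.
before = dpmo(defects=40, units=2_000)
# After standardizing: 5 failed changes in 2,000 attempts.
after = dpmo(defects=5, units=2_000)

print(f"before: {before:,.0f} DPMO, after: {after:,.0f} DPMO")
# A lower DPMO after the process change means outcomes became
# more consistent -- i.e., variation was reduced.
```

Comparing the metric before and after a process change is exactly the "measure deviation from a desired outcome" loop described above.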
This stems from Six Sigma's origins in lean manufacturing, where automation and standardization are commonly used to improve product quality, usually by reducing the number of defective units produced.
This is highly applicable to DevOps and the network, where errors are commonly cited as a significant contributor to lag in application deployment timelines, thanks to the troubleshooting those errors require. It is easy enough to see the relationship: defective products are not all that different from defective services, regardless of the cause of the defect.
Number four on Kirk's list addresses this point directly:
#4: Reduce variation.
Variation can be good in some contexts, but in the network, variation introduces unexpected errors and unexpected behaviors.
Whether you manage dozens, hundreds, or thousands of network devices, how much of your configuration can be standardized? Can you standardize the OS version? Can you minimize the number of models that you use? Can you minimize the number of vendors?
Variation increases network complexity, testing complexity, and the complexity of automation tools. It also increases the knowledge that engineers must possess.
Obviously, there are cost and functional trade-offs here, but reducing variation should at least be considered.
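Kirk's questions about OS versions, models, and vendors can be answered mechanically from a device inventory. A minimal sketch of that tally, where the inventory records and field names are assumptions for illustration:

```python
# Sketch: measure configuration variation across a device inventory.
# The inventory records below are hypothetical examples.
from collections import Counter

inventory = [
    {"host": "sw-01", "vendor": "VendorA", "model": "X100", "os": "9.3.1"},
    {"host": "sw-02", "vendor": "VendorA", "model": "X100", "os": "9.3.1"},
    {"host": "sw-03", "vendor": "VendorA", "model": "X200", "os": "9.1.7"},
    {"host": "rt-01", "vendor": "VendorB", "model": "R50",  "os": "4.2.0"},
]

# Count distinct values per field: fewer distinct values means
# less variation, and less for engineers and tooling to cover.
for field in ("vendor", "model", "os"):
    counts = Counter(device[field] for device in inventory)
    print(f"{field}: {len(counts)} distinct -> {dict(counts)}")
```

Tracking these counts over time gives a simple, objective signal of whether standardization efforts are actually shrinking the variation Kirk warns about.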
What Kirk is saying, without saying it, is that standardization improves consistency in the network. That's no surprise, as standardization is a key method of reducing operational overhead. Standardization (or "reducing variation," if you prefer) achieves this by addressing the network complexity that contributes heavily to operational overhead and to variation in outcomes (aka errors).
That's because a key contributor to network complexity is the sheer number of boxes that make up the network and complicate its topology. These boxes are provisioned and managed each according to its own paradigm, increasing the burden on operations and network teams by requiring familiarity with a large number of CLIs, GUIs and APIs. Standardizing on a common platform relieves this burden by providing a common CLI, GUI and set of APIs that can be used to provision, manage and control critical services. The shift to a modularized architecture based on a standardized platform increases flexibility and the ability to rapidly introduce new services without incurring the additional operational overhead associated with new, single-service solutions. It reduces variation in provisioning, configuration and management (aka architectural debt).
SDN, on the other hand, tries to standardize network components through the use of common APIs, protocols, and policies. It seeks to reduce variation in interfaces and policy definitions so the components comprising the data plane can be managed as if they were standardized. That's an important distinction, though one best left for another day to discuss. Suffice it to say that standardization at the API or model layer can leave organizations with significantly reduced capabilities, as standardization almost always commoditizes functions down to the lowest common set of capabilities.
That is not to say that standardization at the API or protocol layer isn't beneficial. It certainly can and does reduce variation and introduce consistency. The key is to standardize on APIs or protocols that are supportive of the network services you need.
What's important is that standardization on a common service platform can also reduce variation and introduce consistency. Applying one or more standardization efforts should then, ostensibly, net higher benefits.