
The broken DevOps production line

Let's say a raw material has to go through four sequential steps in a production line to become a finished product. Each step uses different equipment and has a different processing capacity. Simple math tells us that the step with the lowest processing capacity (volume / time) determines the processing capacity of the entire line. Any attempt to increase the capacity of a step other than the weakest one will not improve the output of the line.
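The arithmetic is easy to see in a few lines of code. This is a minimal sketch; the step names and capacity numbers are illustrative assumptions, not figures from the post.

```python
# Throughput of a sequential line is limited by its slowest step.
# Capacities are in units per hour (hypothetical numbers).
capacities = {"cut": 120, "weld": 80, "paint": 150, "pack": 100}

# The line can never move faster than its weakest step.
line_capacity = min(capacities.values())
bottleneck = min(capacities, key=capacities.get)

print(bottleneck, line_capacity)  # weld 80
```

Raising "cut" or "paint" changes nothing here; only raising "weld" moves `line_capacity`.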

How do we increase the processing capacity of the weakest step? We can reduce the complexity of the task, increase the capability of the equipment, or re-engineer the process step to make it more productive.

Fundamentally, there is no other way.
As we strengthen the weakest step, the constraint will move to the next weakest one.
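Extending the same sketch shows the constraint shifting: once the weakest step is strengthened past the next weakest, a different step becomes the limit. Again, names and numbers are hypothetical.

```python
# Strengthening the current bottleneck moves the constraint elsewhere.
capacities = {"cut": 120, "weld": 80, "paint": 150, "pack": 100}

capacities["weld"] = 130  # invest in the weakest step
new_bottleneck = min(capacities, key=capacities.get)

print(new_bottleneck, capacities[new_bottleneck])  # pack 100
```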

This means that in any process flow where the processing capacities of individual steps differ, all the steps have to be in sync with the weakest step. If they are not, there will be either idle time or in-process inventory.
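The inventory effect is also just arithmetic: when an upstream step outruns the step after it, the difference piles up between them. A minimal sketch, with hypothetical rates:

```python
# If upstream produces faster than downstream consumes,
# work-in-process (WIP) accumulates between the two steps.
upstream_rate = 120    # units per hour (hypothetical)
downstream_rate = 80   # the weaker step
hours = 8              # one shift

wip = max(0, (upstream_rate - downstream_rate) * hours)
print(wip)  # 320 units queued after one shift
```

The only alternative to that queue is slowing the upstream step down to 80, i.e. idle time.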

A theoretical solution is to have just one step: a single, highly capable machine that carries out all the process steps in one go. This ensures the equipment is fully utilized (no idle time) and there is no inventory.

Of course, we have to bear in mind that there are physical and logical limits to the complexity of tasks a single machine can handle.

Let's see if things change if we replace equipment with people.

Again, the person with the least processing capacity will determine the speed of the line. Keep in mind that this might not be the least skilled person. Processing capacity is a function of both task complexity and skill, so it may well be that this person is handling the most complex task.

People with more processing capacity (more skill or easier tasks) will have to stay in sync with the one with the least capacity. If they don't, they will either be idle or pile up in-process inventory (WIP).

So far, there is actually no difference between equipment and people.

However, there is an obvious one.

Instead of sitting idle or producing more inventory, the people in the chain with higher processing capacity can COLLABORATE to speed up the person who is moving the slowest. Unlike machines, which cannot go beyond their defined scope, people can. They can, if they want to, team up to strengthen the weakest link and, by doing so, strengthen the entire chain.

You don't always need the same skill to help a person: you don't have to be a developer to help a developer, or a tester to help a tester. A developer (or, for that matter, a BA, an architect, a systems administrator, a release manager, or anybody else) can speed up the work of a tester by sharing critical information at the right time, or by helping with coordination activities that do not require deep testing skills.

The important thing is this: if the team in the chain is (1) aware of the weakest link and (2) incentivized to strengthen the overall chain instead of their own individual links, the output will be far more than the sum of their individual contributions.

In a waterfall world, people worked like machines: each focused on their given task, taking inputs and churning out outputs, neither aware of nor bothered about the larger production line.

This is the very reason Scrum works. By getting cross-functional people together and aligning their efforts toward a common 'production line', you encourage them to collaborate. The focus is on keeping the entire production line moving, not on maximizing the efficiency of individual process steps.

As organizations now move toward a 'DevOps' world, it is critical that this extended production line be formalized and made visible. Agile, to a large extent, made the production line from Requirements to Testing visible. Now the onus is on DevOps to integrate the Release, Support, Operations and Infrastructure teams into the same line.

My personal observation, based on interactions with multiple customers starting off with 'DevOps', is that there is too much attention on automation and too little on the broken 'production line' between Dev and Ops. Dev and Ops need to be incentivized to man a common production line, not separate ones. Their goals need to be aligned. The processing capacities of the different steps need to be synced up so that there is continuous flow instead of excessive idle time or in-process inventory. Only when their goals are aligned will they collectively team up to strengthen the 'weakest' step.

The automation discussion should happen after this. With no production line visible and no awareness of the relative processing capacities of its steps, automation investments can be shots in the dark: the blast will be heard and the flash of light will be seen, but there is no knowing whether the target was hit!


About the author

Sujoy is a TOGAF Certified Enterprise Architect, a Certified Six Sigma Black Belt and Manager of Organizational Excellence from the American Society for Quality, a PMP, a CISA, an Agile Coach, a DevOps Evangelist and, lately, a Digital enthusiast. With over 20 years of professional experience, he has led multiple consulting engagements with Fortune 500 customers across the globe. He has a Master's Degree in Quality Management and a Bachelor's in Electrical Engineering. He is based out of New Jersey.
