
Blog Feed Post

Rebuilding After Disaster: DevOps is the first step

[Image: flooded datacenter – picture compliments of FS.com: http://www.stacki.com/wp-content/uploads/2016/05/DC.Flood_.jpg]

Not our flooded DC, but similar.

If you’ve ever stood in the ruins of what was once your datacenter and pondered how much work you had to do and how little time you had to do it, then you probably nodded at the title. If you have ever worked to get as much data off of a massive RAID array as possible with blinking red lights telling you that your backups should have been better, you too probably nodded at the title.

It is true that I have experienced both of these situations. A totally flooded datacenter (caused by broken pipes in the ceiling) sent us scrambling to put together something usable while we restored normal function. The water had come through the ceiling and was a couple of feet deep, so the destruction was pretty thorough. In a different role, a RAID array with many disks lost one disk, and before our service contractor came to replace it (less than 24 hours later), two more went. Eventually, as more people than just us had problems, the entire batch of disks that this RAID device's drives came from was determined to be faulty. Thing is, a ton of our operational intelligence was on those disks in the form of integrations – the system this array supported knitted together a dozen or so systems on several OS/DB combinations, and all of the integration code was stored on the array. The system was essentially the cash register of the organization, so downtime was not an option. And I was the manager responsible.

Both of these scenarios came about before DevOps was popular, and in both we had taken reasonable precautions. But when the fire was burning and the clock was ticking, those reasonable precautions weren't good enough to get us up and running (even minimally) in a short amount of time. And that "minimally" is massively important. In the flood scenario, the vast majority of our hardware was unusable, and in a highly dynamic environment, some of our code – and even purchased packages – was not in the most recent set of backups. That last bit was true with the RAID array also. We were building something that had never been done before at the scale we were working at, so change was constant, new data inputs were constant, and backups – like most backups – were not continuous.

With DevOps, these types of disasters are still an issue, and some of the problems we had will still have to be dealt with. But one of the big issues we ran into – getting new hardware, getting it installed, getting apps onto it, and getting it running so customers/users could access something – is largely taken care of.

With provisioning – server and app – and continuous integration, the environment you need can be recreated in a short amount of time, assuming you can get hardware to run it on, or can fall back on hosted or cloud infrastructure for the near term.

Assuming that you are following DevOps practices (I'd say "best practices", but this is really more fundamental than that), you have configuration and reinstall information in GitHub or Bitbucket or something similar. So getting some of your services back online becomes a case of downloading and installing a tool like Stacki or Cobbler, hooking it to a tool like Puppet or SaltStack, and pulling your configuration files down to start deploying servers, from RAID configuration up to the application.
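To make that concrete, here is a minimal sketch of what that rebuild bootstrap might look like, using Cobbler for bare-metal provisioning and SaltStack for configuration. The repo URL, file paths, and profile names below are hypothetical, and exact Cobbler flags vary by version – treat it as an illustration of the pattern, not a drop-in script.

    #!/usr/bin/env python3
    """Disaster-recovery bootstrap sketch (illustrative, not canonical).

    Assumes: a reachable Git remote holding your provisioning config,
    Cobbler and Salt already installed on the rebuild box, and
    replacement hardware that can PXE boot. Names and paths are
    hypothetical; Cobbler flag names vary by version.
    """
    import subprocess

    CONFIG_REPO = "git@github.com:example-org/infra-config.git"  # hypothetical

    def run(cmd):
        """Run a command, echoing it so the rebuild is auditable."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Pull the configuration you kept in version control.
    run(["git", "clone", CONFIG_REPO, "/srv/infra-config"])

    # 2. Register a distro and profile so bare metal can PXE boot.
    run(["cobbler", "distro", "add", "--name", "centos-base",
         "--kernel", "/srv/infra-config/pxe/vmlinuz",
         "--initrd", "/srv/infra-config/pxe/initrd.img"])
    run(["cobbler", "profile", "add", "--name", "app-server",
         "--distro", "centos-base",
         "--autoinstall", "/srv/infra-config/pxe/app-server.ks"])
    run(["cobbler", "sync"])

    # 3. Once servers come up with a Salt minion baked in, push the
    #    application and integration configuration back onto them.
    run(["salt", "*", "state.highstate"])

The point is less the specific tools than the shape of the recovery: version-controlled configuration, a provisioning tool to get operating systems onto replacement metal, and a configuration-management tool to lay the applications back down on top.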

Will it be perfect? Not at all. If your organization has gone all-in and keeps network configuration information in a tool like Puppet with the Cisco or F5 plugins, for example, it is highly unlikely that the short-term network gear you use while you work things out with an insurance company will be configurable from that information. But having DevOps in place will save you a lot of time, because you don't have to rebuild everything by hand.

And trust me, at that instant, the number one thing you will care about is "How fast can I get things going again?", knowing full well that the answer to that question will be temporary while the real problems are dealt with. Anything that can make that process easier will help. You will already be stressed trying to get someone – be it vendor reps for faulty disk drives or insurance reps for disasters – out to help you with the longer-term recovery, so the short term should be as automatic as possible.

Surprisingly, I can't say "I hope you never have to deal with this". It is part of life in IT, and I honestly learned a ton from it. The few thousand lines of code and tens of thousands of billable data records we lost in the RAID incident were an expensive lesson, but we came out stronger and more resilient. The flooded datacenter gave me a chance to deal with insurance on a scale most people never have to, and (with the help of the other team members, of course) to build a shiny new datacenter from the ground up – something we all want to do. But if you have a choice, avoid it. Since you don't generally have a choice, prepare for it. DevOps is one way of preparing.

 

Read the original blog entry...

More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
