
How Often Should You Check Your Backups?

We are regularly told that checking our own bodies for signs of change is a good thing: early diagnosis of disease gives a better fighting chance of curing the problem. So, in the IT world, where we assume all of our backups have been taken successfully, how often should we be checking the results and making sure a backup will actually work on the fateful day we need to do a restore? This question was posed by Federica Monsone on Twitter this week. Here's an attempt to provide an answer.

First of all, let’s consider the whole point of taking backups. Excluding the inappropriate use of backup for archiving, the backup process is there to ensure you can maintain continuous access to your data in the event of unforeseen circumstances. Usually (but not exclusively) these are data loss due to equipment or power failure, data corruption (whether from a software bug or malicious action), accidental deletion, or a need to return to a previous point in time for consistency purposes where there are multiple interrelated systems.

Backups are used infrequently and, like insurance, you never know how good they are until you come to rely on them. I wouldn’t advocate crashing your car just to check that your insurer will pay out; however, periodic validation of backups, and more importantly of the restore process, is a good thing. Why? Because in a recovery scenario, you want to be confident your backups have worked and that the process of recovery will be as smooth as possible. Recovering data is typically a time-critical operation: you’re recovering data because somebody needs the information quickly, or because a system is down. When the pressure is on, the process needs to work flawlessly. In addition to the time pressures, restores should be periodically checked for the reasons below (a simple verification sketch follows the list):

  • Backup media deteriorates over time; you should ensure any failing media is replaced.
  • Backup software upgrades can cause issues with restores if data formats are changed.
  • Server software upgrades can cause issues with restores.
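
As a rough illustration of what such a periodic check could look like, here is a minimal sketch of verifying a test restore, assuming the backup job also writes a simple JSON manifest of SHA-256 checksums alongside the data. The manifest file name, its format and the restore path are illustrative assumptions, not any particular backup product's interface; once the files have been restored by whatever means your backup software provides, the script compares each one against the checksum recorded at backup time.

```python
# Hypothetical sketch: verify a test restore by comparing checksums of the
# restored files against checksums recorded when the backup was taken.
# Paths and the manifest format are assumptions for illustration only.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(manifest_file: Path, restore_root: Path) -> bool:
    """Compare each restored file against the checksum recorded at backup time.

    The manifest is assumed to be JSON mapping relative paths to SHA-256
    hashes, written by whatever process took the backup.
    """
    manifest = json.loads(manifest_file.read_text())
    ok = True
    for rel_path, expected in manifest.items():
        restored = restore_root / rel_path
        if not restored.exists():
            print(f"MISSING  {rel_path}")
            ok = False
        elif sha256_of(restored) != expected:
            print(f"MISMATCH {rel_path}")
            ok = False
    return ok

if __name__ == "__main__":
    # Example: files were restored into /restore-test from last night's backup.
    if verify_restore(Path("backup-manifest.json"), Path("/restore-test")):
        print("Restore verified: all files match the recorded checksums.")
    else:
        raise SystemExit("Restore verification failed - investigate now, not later.")
```

The point is not the specific tooling but that the check is scripted and repeatable, so it actually gets run on a schedule rather than only when disaster strikes.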

So, to the heart of the matter: how often to test restores? I believe restore testing should be based on the criticality of the data and on the complexity of the backup infrastructure. If data integrity is the essence of your business, test restores more often. If you have a shared backup infrastructure, test restores from that; if you have a more distributed design, you’ll need to test each backup component. Here are some thoughts (a rough scheduling sketch follows the list):

  • Test restores of individual files on a weekly basis
  • Test restores of large volumes of data on a monthly basis
  • Test whole-system restores every six months
  • Randomly select media for restore; choose new and old media alike
  • Test restores into your DR site (if you have one)
  • Replace faulty media immediately
  • Have a media retirement policy
  • Have a backup onboarding policy
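
To show how that kind of cadence and the random media selection might be tracked, here is a minimal sketch; the media catalogue, labels, dates and intervals are invented purely for illustration and are not tied to any particular backup product.

```python
# Hypothetical sketch of a restore-test scheduler: given a catalogue of backup
# media, pick a mix of old, new and random media each cycle, and work out which
# test types (file-level, volume-level, whole-system) are due. All data below
# is invented for illustration.
import random
from datetime import date, timedelta

# Cadences taken from the list above.
TEST_INTERVALS = {
    "individual_files": timedelta(weeks=1),
    "large_volume": timedelta(days=30),
    "whole_system": timedelta(days=182),
}

def pick_media_for_test(catalogue, sample_size=2):
    """Select the newest and oldest media plus a random sample, so that
    both fresh and ageing media get exercised."""
    by_age = sorted(catalogue, key=lambda m: m["written"])
    selection = {by_age[0]["label"], by_age[-1]["label"]}
    picks = random.sample(catalogue, k=min(sample_size, len(catalogue)))
    selection.update(m["label"] for m in picks)
    return sorted(selection)

def tests_due(last_run: dict) -> dict:
    """Return which test types are due, given the date each last ran."""
    today = date.today()
    return {name: last + TEST_INTERVALS[name] <= today
            for name, last in last_run.items()}

if __name__ == "__main__":
    media_catalogue = [
        {"label": "TAPE-0001", "written": date(2009, 1, 5)},
        {"label": "TAPE-0142", "written": date(2010, 6, 1)},
        {"label": "TAPE-0210", "written": date(2010, 11, 20)},
    ]
    print("Media to exercise this cycle:", pick_media_for_test(media_catalogue))
    print("Tests due:", tests_due({
        "individual_files": date(2010, 11, 15),
        "large_volume": date(2010, 10, 1),
        "whole_system": date(2010, 6, 1),
    }))
```

Deliberately including both the newest and the oldest media in every cycle means ageing media is exercised before it is needed in anger, which ties back to the media retirement policy above.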

There’s no right or wrong way to approach testing restores; it’s all about building confidence in the restore process, so when you need it, you can be happy it will work for you.
