The DevOps Database | Part 4

A culture of continual experimentation and learning

In the final post in this series about bringing DevOps patterns to database change management, we’re going to discuss the Third Way.  Here’s a refresher on the Third Way from the introductory post in this series:

The Third Way: Culture of Continual Experimentation & Learning – This way emphasizes the benefits that can be realized by embracing experimentation, risk-taking, and learning from failure. With this attitude, experimentation and risk-taking lead to innovation and improvement, while embracing failure helps the organization produce more resilient products and sharpen the skills teams need to recover quickly when unexpected failure does occur.

The Third Way is by far the most intriguing of the Three Ways to me.  I’ve spent the lion’s share of my career in early stage start-ups, where cycles of experimentation, learning, and failure are the norm.  When bringing a new product or service to market, your latest release is never your last.  It may not even be the last release this week.  You are constantly experimenting with new workflows and technology, learning about your target market, and getting more valuable information from your early failures than your early successes.  The Third Way is crucial to the success of an early stage company.

While the benefits of the Third Way still apply to more established companies and product lines, practicing it becomes more difficult.  The potential negatives of experimentation and risk-taking are much harder to stomach when you have a large base of paying customers with SLAs. This aversion to risk is most acute in the data platform, where outages, performance problems, and data loss are not an option. Complicating matters further is how difficult it can be to unwind the database changes that were made to support a specific version of your app.  Application code can usually be uninstalled and replaced with the previous working version fairly simply should problems arise. Reverting the database changes that support that version of the application is more akin to defusing a bomb: changes must be reverted delicately and meticulously to avoid errors and omissions that could negatively impact your data platform.

What DBAs and release managers need to facilitate experimentation and risk-taking on the data platform is a special combination of tools and process.  This combination should make it easy to identify the root cause of issues, quickly remediate problems caused by application schema structure, and revert to a previous version of the schema safely.  When we started Datical, we spent several hours in conversation and at whiteboards exploring these unique needs and hammering out a path to usher the Third Way into regular database activity.

A Rollback Designed With Every Change
The biggest problem with experimentation in the data platform is how difficult it is to move backward and forward through your schema’s version history.  We feel the best way to bring more flexibility to the process of upgrading and reverting a schema is an attitude shift: your rollback strategy for each database change must become as important as the change itself.  The best time to craft your rollback strategy is when the change itself is being designed.  When the motivation for the change is fresh in your mind and the dependencies of the object being created, dropped, or altered are clearly mapped out, a developer or DBA can craft a rollback strategy for which every contingency has been considered.  This leads to a stronger safety net and makes your application schema as agile and easily managed as your application code. The database no longer prevents you from being bold; you can move quickly and safely between versions to accommodate experimentation.
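As a minimal sketch of this attitude shift, the snippet below pairs each change with a rollback written at design time. The names and structure are illustrative assumptions, not Datical's actual API:

```python
# Hypothetical sketch: every change carries its rollback, authored together.
from dataclasses import dataclass

@dataclass(frozen=True)
class Change:
    """A schema change and its inverse, designed at the same time."""
    change_id: str
    forward_sql: str   # the change itself
    rollback_sql: str  # the escape hatch, written while intent is fresh

ADD_LOYALTY_TIER = Change(
    change_id="add-loyalty-tier",
    forward_sql="ALTER TABLE customer ADD COLUMN loyalty_tier VARCHAR(16);",
    rollback_sql="ALTER TABLE customer DROP COLUMN loyalty_tier;",
)

def deploy(changes, execute):
    """Apply changes in order; `execute` runs SQL against the target database."""
    for change in changes:
        execute(change.forward_sql)

def rollback(changes, execute):
    """Revert changes in reverse order using their paired rollback steps."""
    for change in reversed(changes):
        execute(change.rollback_sql)
```

Because the rollback travels with the change, reverting a version is a mechanical walk backward through the history rather than an ad hoc scripting exercise.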

The Richness of Model-Based Comparison & Remediation
I’m a native Texan, born and raised in Austin.  I know the jokes about how proud and vocal Texans can be about their home state.  That being said, I spend more time telling people about the model-based approach Datical has applied to database change management than I do telling them how wonderful it is that I was fortunate enough to be born in the best state in the country.  The advantages of the model-based approach really shine through when it comes to experimentation and troubleshooting.  The model allows you to annotate all objects and modifications to your application’s schema with the business reason that prompted them.  This detailed history is invaluable when designing new changes or refactoring your schema as part of an experimental exercise.  You immediately know which objects are most crucial, what your dependencies are, where you need to tread lightly, and which areas are ripe for experimentation due to the changing needs of your business.  Designing intelligently dramatically reduces risk.
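To make the idea concrete, here is a hypothetical sketch of a schema model whose objects carry the business reason that prompted them. The structure is an assumption for illustration, not Datical's actual model format:

```python
# Hypothetical schema model: every object records why it exists.
from dataclasses import dataclass, field

@dataclass
class Column:
    name: str
    sql_type: str
    reason: str  # business motivation, captured at design time

@dataclass
class Table:
    name: str
    columns: dict = field(default_factory=dict)

    def add_column(self, name, sql_type, reason):
        self.columns[name] = Column(name, sql_type, reason)

orders = Table("orders")
orders.add_column("order_id", "BIGINT", "primary key for order tracking")
orders.add_column("promo_code", "VARCHAR(32)",
                  "marketing experiment: trackable discount campaigns")
```

When every column can answer "why are you here?", deciding where it is safe to experiment stops being guesswork.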

Troubleshooting with models is also dramatically faster and more reliable than other methods. Programmatically comparing models allows you to determine the differences between two databases much more quickly than manually comparing diagrams or SQL scripts.  You know with certainty exactly what has changed and what is missing in a fraction of the time that human review takes.

Once you have identified the differences, remediation is as simple as plugging one, some, or all of the detected differences into the model and deploying those changes to the non-compliant instance.
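Here is a minimal sketch of that compare-and-remediate loop, assuming the simple column model above; the real product's comparison is far richer than this:

```python
# Hypothetical sketch: diff two schema models, then generate remediation.
def diff_columns(expected: dict, actual: dict) -> dict:
    """Return columns present in the expected model but missing from actual."""
    return {name: sql_type
            for name, sql_type in expected.items()
            if name not in actual}

expected = {"order_id": "BIGINT", "promo_code": "VARCHAR(32)"}
actual = {"order_id": "BIGINT"}  # the non-compliant instance

for name, sql_type in diff_columns(expected, actual).items():
    # Plug each difference back into the model and deploy the change.
    print(f"ALTER TABLE orders ADD COLUMN {name} {sql_type};")
```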

Flexible Quick Rollback
If you’ve taken our advice to design your rollback strategy when you implement a change, recovering from disaster becomes testable, fast, and simple.  Before going to production or a sensitive environment, you should always test your rollback steps in dev, test, and staging.  This will allow you to make any tweaks or changes to your rollback strategy before you are in a pinch.  Think of it like testing your smoke alarm: hopefully you’ll never need it, but it’s nice to know that it’ll work if you do.
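One way to test the smoke alarm is a round-trip test in a lower environment: apply the change, roll it back, and verify the schema is unchanged. This sketch assumes hypothetical `execute` and `snapshot_schema` helpers for your environment:

```python
# Hypothetical round-trip test for a change/rollback pair.
def test_rollback_round_trip(execute, snapshot_schema, change):
    before = snapshot_schema()      # capture the schema as it stands
    execute(change.forward_sql)     # apply the change
    execute(change.rollback_sql)    # immediately revert it
    after = snapshot_schema()
    assert after == before, "rollback did not restore the original schema"
```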

Let’s say the worst happens. You deploy a new set of database changes and the application performance degrades or errors are logged.  The decision is made to revert the entire installation to the previous version.  Because you have carefully designed, tested and refined your rollback strategy, rolling back the database changes becomes a push button operation or a single invocation of a command line tool.  No more running disparate SQL scripts or undoing changes on the fly.  You can be confident that your database has been returned to the same state it was in before the upgrade.

Summary
The database has long been handled with kid gloves, and for good reason.  Data is a precious resource for consumers and businesses.  Consumers provide data to businesses and trust that it will be kept safe and used to offer them better products and services.  Businesses rely on data to strategize, grow, and become more efficient and profitable.  It is the lifeblood of our economy.  As our ability to collect and process data grows, so does the speed at which an enterprise must act on what it learns from that data.  Data must be kept safe, but the database must become more agile to accommodate the growing pressure for faster value realization of business intelligence initiatives.  DevOps patterns hold the key to this necessary agility while maintaining or improving the security and integrity of data stores.  The database and DBAs need to be brought into the application design process earlier and must be treated as the first-class stakeholders and assets that they are. Companies that acknowledge this and move to adopt DevOps patterns and include their database teams will have a distinct competitive advantage over those that don’t.  Don’t miss the boat!

More Stories By Pete Pickerill

Pete Pickerill is Vice President of Products and Co-founder of Datical. Pete is a software industry veteran who has built his career in Austin’s technology sector. Prior to co-founding Datical, he was employee number one at Phurnace Software and helped lead the company to a high profile acquisition by BMC Software, Inc. Pete has spent the majority of his career in successful startups and the companies that acquired them including Loop One (acquired by NeoPost Solutions), WholeSecurity (acquired by Symantec, Inc.) and Phurnace Software.
