Addressing the Concerns CIOs Have with the SDDC

A Q&A session with CIOs regarding the SDDC

Question 1. Vendors are racing to lead the movement towards a software-defined data centre. Where are we up to in this journey, and how far are we from seeing this trend widely adopted?

Considering most organizations have still not fully virtualized or moved towards a true private cloud model, the SDDC is still in its infancy in terms of mainstream adoption and certainly won't be an overnight process. While typical early adopters are advancing quickly down the software-defined route, these are mostly organizations with large-scale, multi-site data centers that are already mature in terms of their IT processes. Such large-scale organizations are not the norm, and while the SDDC is certainly on the minds of senior IT executives, establishing such a model requires several key challenges and tasks to be addressed first.

Typical environments are still characterized by numerous silos, complex and static configurations, and partially completed virtualization initiatives. Isolated component and operational silos need to be replaced with expertise that covers the whole infrastructure, so that organizations can focus on defining their business policies. In this instance the converged infrastructure model is ideal, as it enables the infrastructure to be managed, maintained and optimized as a single entity by a single team. Such environments also need to dramatically rearrange their IT processes to accommodate features such as orchestration, automation, metering and billing, as these all have a knock-on effect on service delivery, activation and assurance, as well as on change management and release management procedures. The SDDC necessitates a cultural shift within IT as much as a technical one, and cultural change historically takes longer. It could still be several years before we really see the SDDC adopted widely, but it's definitely being discussed and planned for the future.

Question 2. Looking at all the components of a data center, which one poses the most challenges to being virtualized and software-defined?

The majority of data center components have seen considerable technological advancement in the past few years. Yet in comparison to networking, compute and the hypervisor, storage arrays still haven't seen many drastic changes beyond features such as auto-tiering, thin provisioning, deduplication and the introduction of EFDs. Moreover, the focus of software-defined is the application, and dynamically meeting the changing requirements of an application and service offering. Beyond quality-of-service monitoring based on IOPS and back-end / front-end processor utilization, storage arrays still have considerable limitations in terms of application awareness.

Additionally, while automation is integral to a software-defined strategy that can dynamically shift resources based on application requirements, automation technologies within storage arrays remain very limited. Storage features such as dynamic tiering may be automated, but they are still not based on real-time metrics and consequently are not responsive to real-time requirements.

Added to this, storage itself has moved beyond the array and now takes numerous forms such as HDD, flash, PCM and NVRAM, each with its own characteristics, benefits and challenges. The challenge as yet unsolved is to have a software layer that can abstract all of these various formats into a single resource pool. The objective should be that regardless of where these formats reside, whether within the server, the array cache or the back end of the array, data can still be shifted dynamically across them to meet application needs as well as to provide resiliency and high availability.
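To make that objective concrete, here is a minimal, hypothetical sketch in Python. The tier names, classes and placement rule are invented for illustration rather than taken from any vendor's API; the point is simply that placement is driven by an application's stated requirement rather than by which physical box the capacity happens to live in.

```python
# Illustrative sketch only: heterogeneous media presented as one pool,
# with placement decided by the application's latency requirement.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str           # e.g. "server-flash", "array-ssd", "array-hdd"
    latency_ms: float   # typical access latency for this medium
    free_gb: int        # unallocated capacity

@dataclass
class VolumeRequest:
    app: str
    size_gb: int
    max_latency_ms: float   # the application's stated requirement

class StoragePool:
    """Presents HDD, flash, NVRAM, etc. as a single logical pool."""

    def __init__(self, tiers):
        self.tiers = tiers

    def place(self, req: VolumeRequest) -> Tier:
        # Keep only tiers that satisfy the latency requirement and have space,
        # then choose the slowest of those to avoid wasting fast media.
        candidates = [t for t in self.tiers
                      if t.latency_ms <= req.max_latency_ms
                      and t.free_gb >= req.size_gb]
        if not candidates:
            raise RuntimeError(f"no tier can satisfy {req.app}")
        chosen = max(candidates, key=lambda t: t.latency_ms)
        chosen.free_gb -= req.size_gb
        return chosen

pool = StoragePool([
    Tier("server-flash", latency_ms=0.1, free_gb=500),
    Tier("array-ssd",    latency_ms=1.0, free_gb=5000),
    Tier("array-hdd",    latency_ms=8.0, free_gb=50000),
])

tier = pool.place(VolumeRequest(app="oltp-db", size_gb=200, max_latency_ms=2.0))
print(f"oltp-db placed on {tier.name}")
```

A real implementation would also have to rebalance placements continuously as real-time metrics change, which is exactly the capability the answer above argues is still missing from most arrays.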

Question 3. Why has there been confusion about how software-defined should be interpreted, and how has this affected the market?

Similar to when the cloud concept first emerged in the industry, the understanding of the software-defined model quickly became somewhat blurred as the marketing departments of traditional infrastructure vendors jumped on the bandwagon. While they were quick to attach the software-defined terminology to their offerings, there was little if anything different about their products or product strategy. This led to various misconceptions: that software-defined was just another term for cloud, that if it was virtualized it was software-defined, or, even more ludicrously, that software-defined meant the non-existence or removal of hardware.

To elaborate, all hardware components need software of some kind to function, but that does not make them software-defined. For example, storage arrays use various software technologies such as replication, snapshotting, auto-tiering and dynamic provisioning. Some storage vendors can even virtualize third-party arrays behind their own or via appliances, consequently abstracting the storage completely from the hardware so that an end user is merely looking at a resource pool. But this in itself does not make the array software-defined, and herein lies the confusion that some end users face as they struggle to understand the latest trend being directed at them by their C-level execs.

Question 4. The idea of a software-defined data center (virtualizing and automating the entire infrastructure) wildly disrupts the make-up of a traditional IT team. How can CIOs handle the inevitable resistance some of their IT employees will put up?

First and foremost, you can't have a successful software-defined model if your team still has a hardware-defined mentality. Change is inevitable, and whether it's embraced or not it will happen. For experienced CIOs this is not the first time they've seen this kind of technological, and consequently cultural, change in IT. There was resistance to change from the mainframe team when open systems took off; there was no such thing as a virtualisation team when VMware was first introduced; and only now are we seeing converged infrastructure teams being established, despite the CI market being around for more than three years. For traditional IT teams to accept this change, they need to recognize how it will inevitably benefit them.

Market research is unanimous in its conclusion that IT administrators are currently far too busy with maintenance tasks, firefighting and "keeping the lights on" exercises. Figures generally point to around 77% of IT administrators' overall time being spent on mundane maintenance and routine tasks, with very little time spent on innovation, optimization and delivering value to the business. For these teams the software-defined model offers the opportunity to move away from such tasks and free up their time, enabling them to be proactive as opposed to reactive. With the benefits of orchestration and automation, IT administrators can focus on the things they are trained and specialized in, such as delivering performance optimization, understanding application requirements and aligning their services and work to business value.

Question 5. To what extent does a software-defined model negate the need to deploy the public cloud? What effect will this have on the market?

The software-defined model shouldn't, and most likely won't, negate the public cloud; if anything it will make its use case even clearer. The SDDC is a natural evolution of cloud, and particularly of the private cloud. The private cloud is all about the consumption and delivery of IT services, whether layered upon converged infrastructure or self-assembled infrastructure. Those that have already deployed a private cloud and are also utilizing the public cloud have done so with an understanding and assessment of their data: its security and, most typically, its criticality. The software-defined model introduces a greater level of intelligence via software, where application awareness and requirements linked to business service levels are met automatically and dynamically. Here the demand is dictated by the workload, and the software is the enabler that provisions the appropriate resources for that requirement.

Consequently organizations will have a greater level of flexibility and agility than with previous private cloud and even public cloud deployments, thus providing more clarity in the differentiation between the private and public cloud. Instead of needing to request permission from a cloud provider, the software-defined model will provide organizations with on-demand access to their data as well as the ability to independently dictate the level of security. While this may not completely negate the requirement for a public cloud, it will certainly diminish the immediate benefits and advantages associated with it.

Question 6. For CIOs looking for pure bottom-line incentives they can take to senior management, what is the true value of a software-defined infrastructure?

The true value of a software-defined model is that it empowers IT to be a true business enabler. Most business executives still see IT as an expensive overhead rather than a business enabler, typically because of IT's inability to respond quickly enough to the ever-changing service requirements, market trends and new project roll-outs that the business demands. Much of this is caused by the deeply entrenched organizational silos that exist within IT, where typical infrastructure deployments can take months. While converged infrastructure solutions have gone some way towards solving this challenge, the software-defined model builds on this by providing further speed and agility, to the extent that organizations can encapsulate their business requirements into business delivery processes. In this instance infrastructure management processes become inherently linked to business rules that incorporate compliance requirements, performance metrics and business policies. In turn, via automation and orchestration, these business rules dynamically drive and provision the infrastructure resources of storage, networking and compute in real time to the necessary workloads as the business demands it.
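As a purely illustrative sketch of that last point, the Python snippet below shows business rules expressed as data and translated into compute, storage and network requests. The policy names and the provision_* functions are hypothetical stand-ins for whatever orchestration tooling an organization actually uses.

```python
# Hypothetical example: business-level policies as data, driving provisioning.
# The provision_* functions are placeholders for real orchestration calls.

BUSINESS_POLICIES = {
    "customer-facing": {"vcpus": 16, "storage_tier": "flash",
                        "bandwidth_mbps": 1000, "compliance": ["pci-dss"]},
    "batch-analytics": {"vcpus": 32, "storage_tier": "hdd",
                        "bandwidth_mbps": 200, "compliance": []},
}

def provision_compute(workload: str, vcpus: int) -> None:
    print(f"compute: {vcpus} vCPUs allocated to {workload}")

def provision_storage(workload: str, tier: str) -> None:
    print(f"storage: {workload} placed on the '{tier}' tier")

def provision_network(workload: str, mbps: int, compliance: list) -> None:
    print(f"network: {mbps} Mbps for {workload}, compliance zones: {compliance}")

def deploy(workload: str, policy_name: str) -> None:
    """Translate a business-level policy into concrete resource requests."""
    policy = BUSINESS_POLICIES[policy_name]
    provision_compute(workload, policy["vcpus"])
    provision_storage(workload, policy["storage_tier"])
    provision_network(workload, policy["bandwidth_mbps"], policy["compliance"])

# A new customer-facing workload inherits its resources and compliance
# requirements from the business policy rather than from a manual request.
deploy("web-frontend", "customer-facing")
```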

Question 7. To what extent will a software-defined infrastructure change the way end-users should approach security in the data centre?

A software-defined model will change the way data center security is approached in several ways. Traditional physical data center security architecture is renowned for being inflexible and complex due to its reliance on numerous dedicated appliances to meet requirements such as load balancing, gateways, firewalls, wire sniffers and so on. Within a software-defined model, security can potentially be delivered not only as a flexible and agile service but also as a feature that is built into the architecture. Whether security is embedded within the servers, storage or network, a software-defined approach has to take advantage of being able to dynamically distribute security policies and resources that are logically managed and scaled from a single pane of glass.

From a security perspective an SDDC provides immediate benefits. Imagine how much simpler things become when automation can be used to reconfigure infrastructure components that have become vulnerable to security threats. Even automating the isolation of malware-infected network endpoints will drastically simplify typical security procedures, but it will consequently need to be planned for differently.

Part of that planning is acknowledging not just the benefits but also the new types of risk they inevitably introduce. For example, abstracting the security control plane from the security processing and forwarding planes means that any configuration errors or security issues can have far more complex consequences than in the traditional data centre. Furthermore, centralizing the architecture ultimately means a greater security threat should that central control be compromised. These are some of the security challenges organizations will face, and there are already movements in the software-defined security space to cater for them.
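As an illustration of the endpoint isolation mentioned above, here is a minimal, hypothetical Python sketch of quarantine driven from a central control plane. The SdnController class and its method are invented placeholders, not any specific controller's API.

```python
# Hypothetical sketch of automated quarantine from a central control plane.
# SdnController and apply_policy are invented placeholders, not a real API.

class SdnController:
    def apply_policy(self, endpoint_ip: str, policy: str) -> None:
        # In a real deployment this would push flow rules or security-group
        # changes to every relevant enforcement point.
        print(f"pushing policy '{policy}' for {endpoint_ip} to enforcement points")

def quarantine(controller: SdnController, endpoint_ip: str) -> None:
    """Isolate a suspected-compromised endpoint from the production network."""
    controller.apply_policy(endpoint_ip, "isolate-to-remediation-segment")
    print(f"{endpoint_ip} quarantined pending investigation")

# Alerts would normally come from an IDS or malware-detection tool.
controller = SdnController()
for ip in ["10.20.30.41"]:
    quarantine(controller, ip)
```

The same sketch also hints at the new risk described above: a bug or compromise in the central controller affects every enforcement point it manages, which is why the control plane itself has to be planned for as a critical asset.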

Question 8. Where do you see the software-defined market going over the next couple of years?

The concept of the SDDC is going to gain even more visibility and acceptance within the industry and the technological advances that have already come about with Software-Defined Networking will certainly galvanize this. Vendors that have adopted the software-defined tagline will have to mature their product offerings and roadmaps to fit such a model as growing industry awareness will empower organizations to distinguish between genuine features and marketing hyperbole.

For organizations that have already heavily virtualized and built private clouds, the SDDC is the next natural progression. For those that have adopted the converged infrastructure model this transition will be even easier, as they will already have put in place the necessary IT processes and models to simplify their infrastructure into a fully automated, centrally managed and optimized baseline from which the SDDC can emanate. It is fair to say it won't be a surprise to see many of the organizations that embraced the converged infrastructure model also become the pioneers of a successful SDDC.


The above interview with Archie Hendryx is taken from the May 2014 issue of Information Age: http://www.information-age.com/sites/default/files/May%202014%20OPT.pdf

Archie Hendryx is a SAN, NAS, backup/recovery and virtualisation specialist.
