
Further Thoughts on VVOLs

Following on from my first post on VVOLs, I did a little research on the two technology previews presented at this year’s and last year’s VMworld (sessions INF-STO2223 and VSP3205 respectively).  The sessions fill in a few background ideas and seem to have evolved slightly over the past 12 months.  Here are some of the highlights.

I/O Demux

The I/O Demultiplexer is a way to simplify connectivity between the VM and the storage array.  In the latest presentation, the I/O Demux has been relabelled the Protocol Endpoint and should be either SCSI or NFS compliant (I suspect they mean something more protocol-agnostic).  As I mentioned previously, unless VMware are talking about fundamentally redesigning the SCSI protocol, then for block storage there still needs to be the concept of initiator and target to represent host and storage.  Both presentations are at pains to point out that the I/O Demux is not a LUN or mount point, so exactly what is it?
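To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names; none of this is VMware’s actual design) of what a Protocol Endpoint as an I/O demultiplexer implies: one addressable endpoint visible to the host, with individual VVOLs identified by a key carried with each request rather than each being a LUN or mount point in its own right.

```python
# Hypothetical sketch: a Protocol Endpoint (PE) acting as an I/O demultiplexer.
# One endpoint is visible to the host; individual VVOLs sit behind it and are
# selected by an identifier in each request, not by a LUN number.

class VVol:
    def __init__(self, vvol_id):
        self.vvol_id = vvol_id
        self.blocks = {}            # block address -> data

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks.get(lba, b"\x00" * 512)


class ProtocolEndpoint:
    """Single addressable endpoint; routes I/O to the VVol named in the request."""
    def __init__(self):
        self.vvols = {}             # vvol_id -> VVol

    def bind(self, vvol):
        self.vvols[vvol.vvol_id] = vvol

    def submit(self, vvol_id, op, lba, data=None):
        vvol = self.vvols[vvol_id]  # the demux step: pick the right VVol
        if op == "write":
            vvol.write(lba, data)
        else:
            return vvol.read(lba)


# One endpoint, many VVOLs: the host still sees a single target/mount point.
pe = ProtocolEndpoint()
pe.bind(VVol("vm1-data"))
pe.bind(VVol("vm2-data"))
pe.submit("vm1-data", "write", 0, b"hello")
print(pe.submit("vm1-data", "read", 0))
```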

Capacity Pools

A capacity pool is a logical pooling of storage set up by the storage administrator, from which the VMware admin can create VVOLs.  This means responsibility for pool creation (its layout, location, performance) stays with the storage team, but the virtualisation team have the flexibility to allocate VVOLs on demand within that pool of capacity.  In most respects, it seems that a capacity pool is no more than today’s VMFS LUN or an NFS share/mount point, but is consistently named across both protocols.
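As a thought experiment (hypothetical names and attributes, not VMware’s or any vendor’s API), the split of responsibilities might look something like this: the storage admin defines the pool and its characteristics once, and the virtualisation admin carves VVOLs out of it on demand.

```python
# Hypothetical sketch of the capacity pool division of labour.

class CapacityPool:
    def __init__(self, name, capacity_gb, tier):
        # Attributes owned by the storage team: layout, location, performance tier.
        self.name = name
        self.capacity_gb = capacity_gb
        self.tier = tier
        self.allocated_gb = 0
        self.vvols = []

    def create_vvol(self, vvol_name, size_gb):
        # Invoked (indirectly) by the VMware admin, on demand, within the pool.
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("capacity pool exhausted")
        self.allocated_gb += size_gb
        self.vvols.append((vvol_name, size_gb))
        return vvol_name


# Storage admin sets up the pool once...
pool = CapacityPool("gold-pool-01", capacity_gb=2048, tier="gold")
# ...the VMware admin allocates VVOLs from it as VMs are provisioned.
pool.create_vvol("vm1-config", 1)
pool.create_vvol("vm1-disk0", 40)
print(pool.allocated_gb, "GB used of", pool.capacity_gb)
```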

Vendor Provider

Array communication will be managed by a new Vendor Provider plugin on the storage system.  Previously this was described as a Data Management Extension.  I’m never comfortable with array-based vendor plugins, as I think they are usually a kludge to make two incompatible devices work together.  To me the vendor provider already has the smell of an SMI-S provider.  These never get natively implemented and usually sit on a management server that has to be available for the storage admin to manage the array.  VMware need to be clear about whether this provider will be native or not, as a non-native provider only introduces additional complications.  Of course, the vendor provider plugin is probably needed because neither the SCSI nor NFS protocols could be modified to provide the additional management commands VMware wanted or needed.
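To illustrate the concern, here is a sketch of what an out-of-band vendor provider interaction could look like.  The endpoint, paths and operations are all assumptions for the purpose of illustration, not any real vendor or VMware API; the point is that management operations travel over a separate control channel whose availability becomes its own dependency.

```python
# Hypothetical sketch of an out-of-band vendor provider: management operations
# (create/delete/snapshot VVOLs) go over a separate control channel to a plugin
# or management server, not over the SCSI/NFS data path itself.

import json
import urllib.request


class VendorProviderClient:
    """Illustrative client for a vendor provider's management endpoint.
    The URL, paths and payloads are assumptions, not a real vendor API."""

    def __init__(self, base_url, token):
        self.base_url = base_url
        self.token = token

    def _post(self, path, payload):
        req = urllib.request.Request(
            self.base_url + path,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json",
                     "Authorization": "Bearer " + self.token},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def create_vvol(self, pool, name, size_gb):
        # If the provider (or the management server it runs on) is unavailable,
        # this control-path call fails even though the data path may be fine.
        return self._post("/vvols", {"pool": pool, "name": name, "size_gb": size_gb})


# client = VendorProviderClient("https://array-mgmt.example.com/api", token="...")
# client.create_vvol("gold-pool-01", "vm1-disk0", 40)
```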

Actual Implementations

As I mentioned in my previous article, I can see how VVOLs could be easily implemented on NAS systems.  In fact, Tintri already provide the VVOL features on their arrays today.  I took the time at VMworld to chat to Tintri co-founder Kieran Harty to get his view on the VVOL technology and how it might affect them.  In his view, VVOLs will take another 18-24 months to fully mature, during which time Tintri already have a lead with the product they are shipping today.  However, it’s also true to say that today’s NAS vendors could easily add code that recognises the files comprising a VM.
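The reason this is easy on NAS is that a VM on an NFS datastore is already just a directory of files.  A rough sketch of the kind of grouping logic involved (hypothetical, and far simpler than anything a real array would ship) might look like this:

```python
# Hypothetical sketch: on an NFS datastore a VM is a directory of files
# (.vmx, .vmdk, swap, logs), so a NAS array could group them into a
# per-VM management unit with relatively little new code.

from pathlib import Path
from collections import defaultdict

VM_FILE_SUFFIXES = {".vmx", ".vmdk", ".vswp", ".nvram", ".log"}

def vms_on_datastore(datastore_root):
    """Group the recognised VM files under each top-level directory by VM name."""
    vms = defaultdict(list)
    for path in Path(datastore_root).rglob("*"):
        if path.is_file() and path.suffix in VM_FILE_SUFFIXES:
            vm_name = path.relative_to(datastore_root).parts[0]
            vms[vm_name].append(path.name)
    return dict(vms)

# print(vms_on_datastore("/export/datastore1"))
```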

From a block perspective, the spectre of SCSI LUN count still looms.  I expect the I/O Demux is a fix to get around this problem.  VMFS LUNs will be renamed capacity pools and VVOLs will be sub-LUN objects.  Hardware Assisted Locking, introduced in vSphere 4.1, enables locking of parts of a VMFS in a much more efficient fashion (i.e. locking the parts of the VMFS that represent a VVOL).  All that’s missing to deliver VVOLs is a way of mapping exactly which VMFS blocks belong to a VVOL and ensuring the host and storage array both know this level of detail.  One issue that still stands out here is in delivering QoS (Quality of Service).  Today a VM can be moved to a VMFS that offers a specific service level in terms of performance and capacity.  As that VMFS is a LUN, I/O attributes in the array are easily set at the VMFS/LUN level.  This includes I/O processing at the storage port on the storage array.  Command Tag Queuing enables I/O processing to be optimised by reordering queued I/O requests when there are multiple LUNs on a shared storage port.

However, if a VMFS stays as a LUN and VVOLs are logical subdivisions of a LUN, then somehow additional QoS information needs to be provided to the array in order for it to determine the priority order in which to process requests.  Today that’s done by LUN, but a more granular approach will be needed.  How will this be achieved?  Will the array simply know the LBA address ranges for each VVOL and use that information?  Even if this is the case, today’s storage arrays will require significant engineering changes to make this work, and what’s not clear is how a shared array with non-VMware storage will interoperate.
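If the array did work from LBA ranges, the lookup might conceptually resemble the sketch below (hypothetical names, Python for illustration only): map each incoming I/O’s LBA back to a VVOL, pick up that VVOL’s priority, and use it when reordering the queue, rather than making the decision once per LUN.

```python
# Hypothetical sketch: per-VVOL QoS on a sub-LUN basis, by mapping an I/O's LBA
# to the VVOL (and priority) whose address range contains it.

import bisect

class SubLunQosMap:
    def __init__(self):
        self._starts = []           # sorted start LBAs
        self._ranges = []           # (start, end, vvol_id, priority), same order

    def add_vvol(self, start_lba, end_lba, vvol_id, priority):
        i = bisect.bisect_left(self._starts, start_lba)
        self._starts.insert(i, start_lba)
        self._ranges.insert(i, (start_lba, end_lba, vvol_id, priority))

    def lookup(self, lba):
        i = bisect.bisect_right(self._starts, lba) - 1
        if i >= 0:
            start, end, vvol_id, priority = self._ranges[i]
            if start <= lba <= end:
                return vvol_id, priority
        return None, 0              # unknown region: lowest priority


# One LUN (capacity pool) containing two VVOLs with different service levels.
qos = SubLunQosMap()
qos.add_vvol(0, 99_999, "vm1-disk0", priority=3)         # gold
qos.add_vvol(100_000, 199_999, "vm2-disk0", priority=1)  # bronze

queued_io = [("read", 150_000), ("write", 42), ("read", 100_500)]
# Reorder the queue by the priority of the VVOL each LBA falls in.
ordered = sorted(queued_io, key=lambda io: -qos.lookup(io[1])[1])
print(ordered)
```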

As a side note, HP have released a preview of an HP 3PAR system working with VVOL storage.  You can find the video through Calvin Zito’s blog at HP.

The Architect’s View

VVOLs could certainly be a step forward and I’m relishing the prospect of digging into the full technical details.  The concept is a good thing, as it abstracts storage specifics from the virtualisation admin.  At this stage there are still too many unknowns to determine how easy VVOLs will be to implement; however, VMware will no doubt make them a mandatory part of vSphere in the future, so we had better get used to dealing with them now.

Related Links

Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.
