VMware vVols – More Than Just Individual LUNs?

During the VMware keynote session today there was a brief discussion of the upcoming concept of VMware vVols.  Today, a virtual machine sits on a VMFS datastore created on a storage LUN, or on an NFS share.  An individual virtual machine consists of many files and, in the case of VMFS-based VMs, sits on a piece of storage that is potentially shared with other virtual machines.  That's not a great thing, for two reasons.  First, if the array is used to replicate the VMFS (either locally or remotely), then all the VMs within that VMFS get replicated, which can be wasteful and overly complex to manage.  Second, from the storage array's perspective, the LUN is the lowest level of granularity for performance and QoS; because the array has no way to determine the individual contents of the LUN, it can't prioritise workload by VM.

vVols are the answer to the shared-VMFS issue.  The vVol (a bit like a Hyper-V VHD) becomes the single container for storing the entire contents of a VM, including all of its associated metadata.  This finer level of granularity means that a vVol-aware storage array can replicate just that virtual machine and can give it a specific level of performance.

I have no insight into how VMware and the storage vendors intend to implement vVols, but I can see two options.

NFS – On NAS shares, a vVol could simply be a single file, with metadata identifying it as a vVol.  The storage system simply manages this file (aware that it is a VM), providing all the features of prioritised access, replication and so on.  The internal format of the file would determine the VM contents, presumably with some header content to store metadata and the remainder consisting of pages of data representing FBA blocks of the logical disk, much as a VHD works today.  As the VM grows, the file grows.
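To make the grow-on-demand idea concrete, here is a minimal sketch of a VHD-style sparse file: a block allocation table (BAT) maps virtual disk blocks to offsets in the backing file, and space is allocated only on first write.  The class and field names are illustrative only, not the actual VHD or vVol on-disk format.

```python
BLOCK_SIZE = 2 * 1024 * 1024  # 2 MiB data blocks, as used by VHD dynamic disks
UNALLOCATED = 0xFFFFFFFF      # BAT sentinel: virtual block has no backing data yet

class SparseDiskImage:
    """Toy model of a VHD-style sparse file: header metadata, a block
    allocation table, and data blocks appended only when written."""

    def __init__(self, virtual_size):
        # Fixed-size virtual disk, mostly-empty backing file to start with.
        self.n_blocks = (virtual_size + BLOCK_SIZE - 1) // BLOCK_SIZE
        self.bat = [UNALLOCATED] * self.n_blocks  # virtual block -> file offset
        self.blocks = {}                          # file offset -> block payload
        self.next_offset = 0                      # where the next block lands

    def write_block(self, index, payload):
        # Allocate backing space on first write; the file grows as the VM grows.
        if self.bat[index] == UNALLOCATED:
            self.bat[index] = self.next_offset
            self.next_offset += BLOCK_SIZE
        self.blocks[self.bat[index]] = payload

    def allocated_bytes(self):
        # Physical space consumed, regardless of the virtual disk size.
        return len(self.blocks) * BLOCK_SIZE

# A 100 MiB virtual disk with only two blocks written consumes just 4 MiB.
img = SparseDiskImage(100 * 1024 * 1024)
img.write_block(0, b"boot sector")
img.write_block(7, b"application data")
print(img.allocated_bytes())  # 4194304
```

The same structure gives the array a natural place to hang per-VM metadata: everything it needs to know about the vVol lives in one file's header.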

Block – On block storage arrays, a vVol could simply be a LUN.  Today, LUNs can be thin provisioned on most storage arrays, so a vVol could be created as a thin-provisioned LUN at the maximum size permitted by the underlying storage, sitting within a thin pool.  This allows the vVol to grow as necessary, and QoS can easily be applied to an individual LUN.  However, block-based storage has more issues.  First, there is usually a limit to the number of LUNs that may be created on an array, and this could be a limiting factor.  Second, LUNs presented over both iSCSI and Fibre Channel use the SCSI protocol, referencing a target and a device (LUN), with a limit on the number of devices per target.  Although vSphere 5 allows 256 targets per HBA, there is a limit of 256 LUNs per host, far too low to be practical if each vVol were a single LUN.  This restriction, plus the inherent problems of discovering thousands of LUNs using the SCSI protocol, means that as currently defined, one LUN per vVol won't work.  This has to be the main area on which the storage vendors are focusing, namely how to overcome the limitations of SCSI, which is embedded in iSCSI, FCoE and Fibre Channel.
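The arithmetic behind that 256-LUN ceiling is easy to sketch.  The 256 LUNs-per-host figure comes from the text above; the per-VM vVol counts below are illustrative assumptions, not VMware numbers.

```python
# Back-of-the-envelope check of the SCSI addressing problem described above.
LUNS_PER_HOST = 256  # vSphere 5 limit, per the text

def max_vms_per_host(vvols_per_vm):
    """How many VMs one host could address if every vVol were its own LUN."""
    return LUNS_PER_HOST // vvols_per_vm

# Even with a single vVol per VM the ceiling is 256 VMs per host; with a
# more realistic layout (e.g. config + data disk + swap + one snapshot =
# 4 vVols), it drops to 64 VMs, below many consolidation ratios.
for vvols in (1, 2, 4):
    print(f"{vvols} vVol(s)/VM -> {max_vms_per_host(vvols)} VMs/host")
```

And this ignores discovery: a SCSI bus rescan across thousands of devices is slow, which is the other half of the problem the vendors have to solve.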

Options

NFS seems like the simpler option to implement, and perhaps we'll see it as a first step.  However, remembering that EMC owns VMware, block is bound to be treated with equal priority.  To make vVols work, the storage vendors will have to either fix the SCSI issue with clever discovery and mapping techniques, or come up with a totally new way of interfacing with objects on the array.  One suggestion was to use object-based storage.  Today those platforms use REST protocols over HTTP, which is both ill-suited to high-volume I/O and doesn't easily allow sub-object updates.  In any case, this would mean throwing out all of the existing IP and investment in current technology, which is not going to happen.

The Architect’s View

vVols make complete sense as a way to scale virtual machine growth.  However, today's storage protocols pose significant obstacles to achieving vVol granularity.  Storage vendors won't throw out their existing architecture, but will most likely modify their hardware implementations in some way.  Yet again, NFS could serendipitously overtake block as the preferred vVol platform.

Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.

