
Windows Server 2012 (Windows Server “8”) – Virtual Fibre Channel

This is one of a series of posts discussing the new features in Windows Server 2012, now shipping and previously in public beta as Windows Server 8.  You can find references to other related posts at the end of this article.  This post reviews the new Hyper-V 3.0 feature, Virtual Fibre Channel.

Background

Virtual Fibre Channel (VFC) enables a Hyper-V guest to access the physical storage HBAs (host bus adaptors) installed in the Hyper-V server.  Normally, storage adaptors would be reserved for the use of the Hyper-V server itself; however, this new feature acts as a passthrough, enabling any Hyper-V 3.0 guest (at the right O/S level) to access the HBAs and so connect directly to fibre channel storage devices.

VFC is implemented through the use of NPIV, or N_Port ID virtualisation.  This is a fibre channel standard that permits a single HBA to act as multiple nodes within a SAN environment.  Normally, a single HBA connects to the SAN and presents a physical ID known as a World Wide Port Name or WWPN.  This deals with the physical connectivity of the fabric.  At the same time, the connecting server or storage device presents a node name ID or WWNN (World Wide Node Name).  A WWNN can be unique per adaptor, as is the case with most host-based HBAs, or can be a single node representing an entire device such as a storage array.  NPIV allows a single physical adaptor to present multiple node names to the fabric and so effectively “virtualise” the physical device.  Each new node also has to have virtual WWPNs in order to adhere to fibre channel standards.
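
To make the naming concrete, here is a short illustrative sketch (in Python, purely for explanation) of how a 64-bit WWN breaks down in the common IEEE Registered (NAA type 5) layout, and how one physical port can derive additional virtual WWPNs for NPIV.  The OUI and serial values are invented for illustration; real addresses are assigned by the HBA vendor and firmware.

    # Illustrative only: the NAA-5 WWN layout is a 4-bit NAA marker (0101),
    # a 24-bit vendor OUI and a 36-bit vendor-assigned field.
    def naa5_wwn(oui: int, vendor_seq: int) -> str:
        assert oui < 1 << 24 and vendor_seq < 1 << 36
        wwn = (0x5 << 60) | (oui << 36) | vendor_seq
        return f"{wwn:016X}"  # 16 hex digits = 64 bits

    FAKE_OUI = 0x0000C9  # 00-00-C9 is a vendor OUI (Emulex); serial is made up
    physical_wwpn = naa5_wwn(FAKE_OUI, 0x123456789)

    # NPIV: the same physical N_Port logs in to the fabric again with extra,
    # distinct WWPNs, one per virtual port -- here by varying the serial field.
    virtual_wwpns = [naa5_wwn(FAKE_OUI, 0x123456789 + i) for i in range(1, 4)]

    print("physical:", physical_wwpn)  # 50000C9123456789
    for v in virtual_wwpns:
        print("virtual: ", v)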

The benefit of being able to use NPIV to virtualise an HBA is that each guest in a Hyper-V environment can be assigned its own WWNN and so have a direct connection to the SAN.  It may not be immediately obvious how this helps when virtual server infrastructure is supposed to abstract the physical layer, but there are a number of distinct advantages in zoning storage devices this way:

  • Zoning can be done to the individual guest and is therefore more secure (albeit that it still goes through the hypervisor)
  • Tape drives can be supported, so backup software can write directly to devices
  • Storage that requires failover, snapshots and other SCSI-based functionality can be directly supported, especially where non-standard SCSI commands are used

Implementation

VFC is configured in Hyper-V Manager using the new Virtual SAN Manager option (see the screenshots).  Only HBAs and firmware that support NPIV can be used for VFC.  This means newer HBAs only, for example Emulex HBAs at speeds of 4Gb/s and above.  Obviously the SAN fabric needs to support NPIV too.  An HBA can only be assigned to one virtual SAN; however, a virtual SAN can contain multiple HBAs.  Once the virtual SAN is created, a virtual HBA can be assigned to a guest using the Add Hardware section under Settings.  Fibre channel IDs can be set as any 16-digit hexadecimal number, although it’s not advisable to use values that are already reserved for vendors.  Microsoft defaults to some standard values, which can be auto-generated to new values through the “Create Addresses” button.  Each virtual HBA carries two sets of addresses because Hyper-V alternates between them during live migration, letting the guest log in to the fabric from the destination host before releasing the source; at any one time only one set is visible on the fabric.
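
As a quick illustration of that address format, the sketch below (again illustrative Python, reusing the NAA-5 layout from earlier) checks that a hand-entered value is exactly 16 hex digits and warns if it appears to embed a real vendor’s OUI.  The vendor list is a tiny invented sample, not an authoritative registry.

    import re

    KNOWN_OUIS = {0x0000C9: "Emulex", 0x001B32: "QLogic"}  # illustrative sample

    def check_wwpn(candidate: str) -> None:
        if not re.fullmatch(r"[0-9A-Fa-f]{16}", candidate):
            raise ValueError("WWPN must be exactly 16 hexadecimal digits")
        value = int(candidate, 16)
        if value >> 60 == 0x5:  # NAA-5: the next 24 bits are the vendor OUI
            oui = (value >> 36) & 0xFFFFFF
            if oui in KNOWN_OUIS:
                print(f"warning: OUI {oui:06X} is registered to {KNOWN_OUIS[oui]}")

    check_wwpn("C003FF0000FF0000")  # resembles Hyper-V's default C003FF range
    check_wwpn("50000C9123456789")  # warns: Emulex OUI in the NAA-5 layout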

As soon as a guest is started, the fabric login process begins, even if no guest O/S has been installed.  As you can see from screenshot 4, the additional node indicates the source Hyper-V server (in this case PH03) but doesn’t pass through the guest name, labelling it only as “Hyper-V VM Port”.  It would be a nice update to be able to see the VM name there.

Using VFC within the Hyper-V guest requires two things: a supported O/S – one of Windows Server 2008, Windows Server 2008 R2 or Windows Server 2012 – plus the installation of the latest Integration Services update that comes with Windows Server 2012.  This means that the virtual fibre channel adaptor is not emulated as a native device and so can’t be used with other operating systems such as Linux (more on this later).  The fifth screenshot shows the emulated HBA controller and tape drive I presented to the guest.  One question that seems to have been discussed on a number of blogs is support for tape drives.  I can confirm tape drives do work, but I can’t see any documentation from Microsoft to say whether they are officially supported.

Performance

I chose a tape drive as this is a good way of demonstrating performance.  Deploying Backup Exec 2012 onto my Windows 2008 R2 guest, writing to an LTO2 drive, I achieved around 12MB/s, better than I’ve managed with an emulated drive through vSphere 5.0.  This is well under the spec of the drive itself (max 40MB/s) but is certainly usable in small environments.  More testing is needed here, I think, as there appeared to be little overhead on the Hyper-V server in managing the data passthrough.
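
To put those figures in context, a rough back-of-the-envelope calculation (LTO2 native capacity is 200GB) shows what the two rates mean for a full tape:

    # Rough arithmetic: time to fill an LTO2 tape (200 GB native capacity)
    # at the observed 12 MB/s via VFC versus the drive's rated 40 MB/s maximum.
    NATIVE_CAPACITY_MB = 200_000

    for label, rate_mb_s in [("observed via VFC", 12), ("drive native max", 40)]:
        hours = NATIVE_CAPACITY_MB / rate_mb_s / 3600
        print(f"{label}: {rate_mb_s} MB/s -> ~{hours:.1f} h per full tape")

    # observed via VFC: 12 MB/s -> ~4.6 h per full tape
    # drive native max: 40 MB/s -> ~1.4 h per full tape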

The Architect’s View

Virtual Fibre Channel is a great feature for providing native SAN device support.  However, there are a few restrictions on use, most notably the need to have the latest hardware and to be running Microsoft platforms.  I haven’t yet seen any best practices for using VFC – for example, should HBAs be placed in a single virtual SAN, or should multiple virtual SANs be configured for failover?  These are questions that need to be answered.  VFC could be massively improved on two fronts: first, drivers could be provided for other platforms, especially Linux installations; second, if vendors were able to write code using the virtual device, then virtual SAN appliances (VSAs) could use fibre channel rather than being reliant on iSCSI as they are today.

One final comment: Microsoft are doing a poor job of providing detail on these new storage features.  There is precious little to find other than high-level blog information and, as mentioned previously, no best practice documentation that I can locate.  I’d be happy to be pointed in the direction of anything useful, and I will link it from this post.

Related Links

Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.

[Screenshots 1–5]

