
HUS VM – Hitachi’s New Midrange Baby

The acquisition of BlueArc by Hitachi/HDS just over 12 months ago enabled the company to start producing more integrated unified products, branded under the name HUS (Hitachi Unified Storage).  The latest announcement, made on 25th September 2012 (and discussed at the HDS Bloggers’ Day), is a new platform that takes the HUS NAS component (i.e. BlueArc) and pairs it with a cut-down VSP to deliver a mid-range unified platform that can also use external storage resources.

Background

Hitachi’s previous high-end array, the USP-V, also had a baby brother known as the USP-VM.  This was a cut-down deployment of the USP in a class that HDS referred to as “enterprise modular”.  Prior to that, Hitachi offered the NSC55, which at the outset appeared to be a pure virtualisation solution but was actually shipped with disk.  These stripped-down versions of enterprise arrays fit a strange category: they are probably not best deployed by enterprise customers, as they don’t offer the same scalability, while the mid-range products (the AMS2000 series and now HUS) were probably cheaper and so more appropriate for mid-range customers.  However, the benefit of using the enterprise technology is external storage virtualisation, which forms a key piece of Hitachi’s storage strategy.  The USP-VM, which was rack-mounted using standard power supplies, provided the same features as the USP without the implications of deploying three-phase power and enterprise-class data centre facilities.

Enter HUS VM

The new, clumsily named HUS VM platform is the evolution of the stripped-down enterprise option.  It offers the same features and functionality as a VSP (fitting what I’ve heard described as Tier 1.5) and can be combined with the BlueArc NAS blade to provide a unified multi-protocol storage appliance that can also virtualise external storage arrays.  Without a doubt the benefits to customers are ease of deployment and cost.  Compatibility with the enterprise range of products gives enterprise customers the ability to deploy HUS VM in branch environments while maintaining data replication and management tool compatibility.  All of this comes with the goodness of being able to place some of the physical storage on an external array.
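To make the external virtualisation idea concrete, here is a minimal conceptual sketch in Python: the controller presents hosts with a single namespace of virtual LUNs, each backed either by internal capacity or by a LUN on an externally attached array. This is purely illustrative; the class, identifiers and array names are my own inventions, not Hitachi’s software.

```python
# Conceptual sketch of external storage virtualisation: the controller
# presents one namespace of virtual LUNs to hosts, whether each LUN is
# backed by internal disk or by a LUN on an externally attached array.
# All names here are illustrative, not Hitachi's actual software.
from dataclasses import dataclass

@dataclass
class BackingDevice:
    location: str   # "internal" or "external"
    array: str      # owning array, e.g. "HUS-VM" or a legacy box
    device_id: str  # LUN/LDEV identifier on that array

class VirtualisingController:
    def __init__(self):
        self._map: dict[int, BackingDevice] = {}

    def map_internal(self, vlun: int, ldev: str) -> None:
        self._map[vlun] = BackingDevice("internal", "HUS-VM", ldev)

    def virtualise_external(self, vlun: int, array: str, lun: str) -> None:
        # The external LUN is discovered over the back-end ports and
        # re-presented to hosts as if it were local capacity.
        self._map[vlun] = BackingDevice("external", array, lun)

    def resolve(self, vlun: int) -> BackingDevice:
        # Host I/O to a virtual LUN is routed to wherever it is backed.
        return self._map[vlun]

ctrl = VirtualisingController()
ctrl.map_internal(0, ldev="00:10")
ctrl.virtualise_external(1, array="legacy-AMS2500", lun="0007")
print(ctrl.resolve(1))  # I/O to vLUN 1 lands on the external array
```

The point of the model is the mapping layer: hosts never see where a LUN physically lives, which is what lets an HUS VM front older arrays while keeping replication and management consistent.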

Custom Hardware

[Figure: HUS VM Architecture]

What’s not so obvious from the announcement of this new platform is the engineering that has been put into it.  The VSP architecture introduced custom ASICs that offloaded front- and back-end director processing (FED/BED) to Virtual Storage Processors, making the overall design much more efficient by reducing the risk of over- or under-utilised directors.  With HUS VM, Hitachi have collapsed the functionality of seven existing ASICs and processor chips into one custom ASIC.  This brings reduced cost, reduced risk of component failure (there’s less to go wrong or to require replacement) and, of course, reduced environmental requirements.
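The failure-risk claim is simple probability: fewer discrete parts means fewer things that can break. A back-of-the-envelope sketch, using an assumed (purely illustrative) 1% annual failure rate per chip:

```python
# Back-of-the-envelope illustration of why collapsing seven chips into
# one ASIC reduces component-failure risk. The 1% annual failure rate
# is an assumed figure for illustration only, not a Hitachi number.
p = 0.01                        # assumed per-chip annual failure probability
p_seven = 1 - (1 - p) ** 7      # at least one of seven discrete chips fails
p_one = p                       # the single consolidated ASIC fails
print(f"seven discrete chips: {p_seven:.3%} chance of a failure per year")
print(f"one consolidated ASIC: {p_one:.3%}")
```

With these assumed numbers the seven-chip design sees a failure roughly seven times as often (about 6.8% versus 1% per year), which is the intuition behind “less to go wrong”.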

Where Next?

The new HUS VM design reminds me of another Hitachi platform, the previous block-based AMS range of arrays.  In fact, the similarity made me question the ultimate aim of producing this new platform: will Hitachi simply drop the old AMS block platform in favour of the HUS VM in the future?  It makes sense not to run multiple product lines; if one product can be made to hit different market segments and their price categories, harmonising them into a single device is the logical step.

One point worth considering: HP also resell the VSP as the P9500.  This is identical hardware with perhaps a few microcode tweaks, but essentially the same base product.  Will HUS VM be available to HP customers, or will it remain an HDS-only platform?

The Architect’s View

Hitachi are aiming the HUS VM at the mid-range “Tier 1.5” market while retaining all of the goodness of the VSP.  The decision to keep only some features in silicon perhaps shows a trend towards moving to commodity hardware those functions that can be handled there, retaining ASICs only where necessary.  We could be witnessing the consolidation of their block-storage platforms too.  There’s more to discuss around how NAS is implemented in this architecture and what the BlueArc acquisition could really mean for true integration, all of which will have to wait for next time.

Disclaimer: I recently attended the Hitachi Bloggers’ Day 2012.  My flights and accommodation were covered by Hitachi during the trip; however, there is no requirement for me to blog about any of the content presented, and I am not compensated in any way for my time when attending the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.


Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.

