ARM Server to Transform Cloud and Big Data to "Internet of Things"

New Microserver computing platform offers compelling benefits for the right applications

A completely new computing platform is on the horizon. Some call these machines Microservers, others ARM Servers, and sometimes even ARM-based Servers. Whatever the name, Microservers will have a huge impact on the data center and on server computing in general.

What Is a Microserver...and What Isn't
Although few people are familiar with Microservers today, their impact will be felt very soon. This is a new category of computing platform that is available today and is predicted to have triple-digit growth rates for some years to come - growing to over 20% of the server market by 2016 according to Oppenheimer ("Cloudy With A Chance of ARM" Oppenheimer Equity Research Industry Report).

According to Chris Piedmonte, CEO of Suvola Corporation - a software and services company focused on creating preconfigured, scalable Microserver appliances for deploying large-scale enterprise applications - "the Microserver market is poised to grow by leaps and bounds, because companies can leverage this kind of technology to deploy systems that offer 400% better cost-performance at half the total cost of ownership. These organizations will also benefit from the superior reliability, reduced space and power requirements, and lower cost of entry provided by Microserver platforms."

This technology might be poised to grow, but today these Microservers aren't mainstream at all, holding well under 1% of the server market. Few people know about them, and there is a fair amount of confusion in the marketplace. There isn't even agreement on what to call them: different people use different names - Microserver, ARM Server, ARM-based Server and who knows what else.

To further confuse the issue, a number of products on the market are called "Microservers" but aren't Microservers at all - for example, the HP ProLiant MicroServer or the HP Moonshot chassis. These products are smaller and use less power than traditional servers, but they are just a slightly different flavor of the standard Intel/AMD servers we are all familiar with. Useful, but not at all revolutionary - and with a name that causes unfortunate confusion in the marketplace.

Specifically, a Microserver is a server based on "system-on-a-chip" (SoC) technology - the CPU, memory and system I/O are all on one single chip, not spread across multiple components on a system board (or even multiple boards).

What Makes ARM Servers Revolutionary?
ARM Servers are an entirely new generation of server computing, and they will make serious inroads into the enterprise in the next few years. This is serious innovation - revolutionary, not evolutionary.

These new ARM Server computing platforms are an entire system - multiple CPU cores, memory controllers, input/output controllers for SATA, USB, PCIe and others, high-speed network interconnect switches, etc. - all on a SINGLE chip measuring only one square inch. This is hyperscale integration technology at work.

To help put this into context, you can fit 72 quad-core ARM Servers into the space used by a single traditional server board.

Today's traditional server racks are typically packed with boards based on Intel XEON or AMD Opteron chips and are made up of a myriad of discrete components. They're expensive, powerful, power-hungry, use up a considerable amount of space, and can quickly heat up a room to the point where you might think you're in a sauna.

In contrast, ARM Servers with their SoC design are small, very energy efficient, reliable, scalable - and incredibly well suited to mainstream computing tasks that deal with large numbers of users, data and applications (Web services, data crunching, media streaming, etc.). The SoC approach of putting an entire system on a chip results in a computer that can operate on as little as 1.5 watts of power.

Add in memory and a solid-state "disk drive" and you could have an entire server that runs on under 10 watts of power. For example, Calxeda's ECX-1000 quad-core ARM Server node with built-in Ethernet and SATA controllers, and 4GB of memory uses 5 watts at full power. In comparison, my iPhone charger is 7 watts and the power supply for the PC on my desk is 650 watts (perhaps that explains the $428 electric bill I got last month).
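The power arithmetic above can be sketched in a few lines. The wattages are the figures quoted in the text; the assumption that four ARM nodes roughly match one traditional server is a hypothetical placeholder used only for illustration, not a benchmark result.

```python
# Back-of-envelope power comparison (illustrative only).
XEON_SERVER_WATTS = 650   # the desktop PC power supply cited above
ARM_NODE_WATTS = 5        # Calxeda ECX-1000 quad-core node at full power
NODES_PER_XEON_EQUIV = 4  # ASSUMED nodes needed to match one traditional box

arm_watts = ARM_NODE_WATTS * NODES_PER_XEON_EQUIV
print(f"ARM cluster: {arm_watts} W vs traditional server: {XEON_SERVER_WATTS} W")
print(f"Power ratio: {arm_watts / XEON_SERVER_WATTS:.0%}")
# → ARM cluster: 20 W vs traditional server: 650 W
# → Power ratio: 3%
```

Even if the workload-equivalence assumption is off by a factor of two or three, the gap is large enough that the "about 1/10th the power" claim below holds up.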


Realistically, these ARM Servers use about 1/10th the power, and occupy considerably less than 1/10th the space of traditional rack-mounted servers (for systems of equivalent computing power). And at an acquisition price of about half of what a traditional system costs.

And they are designed to scale: the Calxeda ECX-1000 ARM Servers are packaged into "Energy Cards," each composed of four quad-core chips and 16 SATA ports. The cards embed an 80 gigabit per second interconnect switch, which lets you connect potentially thousands of nodes without all the cabling inherent in traditional rack-mounted systems (a large Intel-based system can have upwards of 2,000 cables). This also provides extreme performance - node-to-node communication occurs on the order of 200 nanoseconds.
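Totaling up one such card from the figures already given (four quad-core chips, 5 watts per node) shows where the per-board numbers come from:

```python
# Aggregate figures for one Calxeda "Energy Card" as described above.
# Chip count, cores per chip, SATA ports and per-node wattage are the
# numbers quoted in the text.
CHIPS_PER_CARD = 4
CORES_PER_CHIP = 4
WATTS_PER_NODE = 5
SATA_PORTS_PER_CARD = 16

cores = CHIPS_PER_CARD * CORES_PER_CHIP   # cores per card
watts = CHIPS_PER_CARD * WATTS_PER_NODE   # full-load power per card
print(f"{cores} cores, {SATA_PORTS_PER_CARD} SATA ports, ~{watts} W per card")
# → 16 cores, 16 SATA ports, ~20 W per card
```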

You can have four complete ARM Servers on a board that is only ten inches long and uses only about 20 watts of power at full speed - that's revolutionary.

How Do ARM Servers Translate into Business Benefits?
When you account for reduced computing center operations costs, lower acquisition costs, increased reliability due to simpler construction / fewer parts, and less administrative cost as a result of fewer cables and components, we're talking about systems that could easily cost 70% less to own and operate.
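A simple model shows how those individual savings can compound into a figure like 70%. All dollar amounts below are hypothetical placeholders chosen only to illustrate the shape of the calculation - they are not vendor pricing.

```python
# Hedged, illustrative five-year TCO sketch. Every dollar figure is a
# made-up placeholder; only the structure of the comparison matters.
def five_year_tco(acquisition, annual_power, annual_admin):
    """Total cost of ownership: purchase price plus five years of running costs."""
    return acquisition + 5 * (annual_power + annual_admin)

traditional = five_year_tco(acquisition=10_000, annual_power=1_500, annual_admin=2_000)
microserver = five_year_tco(acquisition=5_000, annual_power=150, annual_admin=500)

saving = 1 - microserver / traditional
print(f"Traditional: ${traditional:,}  Microserver: ${microserver:,}")
print(f"Savings: {saving:.0%}")
# → Traditional: $27,500  Microserver: $8,250
# → Savings: 70%
```

The point of the sketch is that halving acquisition cost alone gets you nowhere near 70%; the bulk of the saving comes from the recurring power and administration lines.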

If you toss in the cost to actually BUILD the computing center and not just "operate it", then the cost advantage is even larger. That's compelling - especially to larger companies that spend millions of dollars a year building and operating computing centers. Facebook, for example, has been spending about half a billion (yes, with a "b") dollars a year lately building and equipping their computing centers. Mobile devices are driving massive spending in this area - and in many cases, these are applications which are ideal for ARM Server architectures.

Why Don't I See More ARM Servers?
So - if all this is true, why do Microservers have such a negligible share of the server market?

My enthusiasm for ARM Servers is in their potential. This is still an early-stage technology - Microserver hardware has really only been available since the second half of 2012. I doubt any companies are going to trade in all their traditional rack servers for Microservers this month. The ecosystem for ARM Servers isn't fully developed yet. And ARM Servers aren't the answer to every computing problem: the hardware has some limitations (it's 32-bit, at least for now), and the platform is better suited to some classes of computing than others. Oh, and although it runs various flavors of Linux, it doesn't run Windows - whether that is a disadvantage depends on your individual perspective.

Microservers in Your Future?
Irrespective of these temporary shortcomings, make no mistake - this is a revolutionary shift in the way that server systems will be (and should be) designed. Although you personally may never own one of these systems, within the next couple of years, you will make use of ARM Servers all the time - as they have the potential to shrink the cost of Cloud Computing, "Big Data", media streaming and any kind of Web computing services to a fraction of the cost of what they are today.

Keep your eye on this little technology - it's going to be big.


Note: The author of this article works for Dell. The opinions stated are his own personal opinions, not those of his employer.
