IT’s Third Platform By @PlexxiInc [#SDN #BigData]

The IT industry at large is in the midst of a massive rewrite of key business applications in response to two technology trends

With the blurring of technology lines, the rise of competitive companies, and a shift in buying models all before us, it would appear we are at the cusp of ushering in the next era in IT — the Third Platform Era. But as with the other transitions, it is not the technology or the vendors that trigger a change in buying patterns. There must be fundamental shifts in buying behavior driven by business objectives.

The IT industry at large is in the midst of a massive rewrite of key business applications in response to two technology trends: the proliferation of data (read: Big Data) and the need for additional performance and scale. In many regards, the first begets the second. As data becomes more available, whether in traditional datacenters or in public and private cloud environments, applications look to use that data, which means the applications themselves have to evolve to deliver the scale and performance required.

Scale up or scale out?
When the industry talks about scale, people typically trot out Moore's Law to explain how capacity doubles roughly every 18 to 24 months. Strictly speaking, Moore's Law is more principle than law, and it originally described the number of transistors on an integrated circuit, not performance. It has, however, become a fairly loose shorthand for thinking about performance over time.

Of course, as the need for compute resources has skyrocketed, the key to solving the compute scaling problem wasn’t in creating chips with faster and faster clock rates. Rather, to get big, the compute industry went small through the introduction of multicore processors.

The true path to scaling was found in distribution. By creating many smaller cores, workloads could be distributed and worked on in parallel, reducing compute times. The ability to push workloads out to large numbers of CPU cores meant that total compute power could be scaled simply by fanning work out further. Essentially, this is the premise behind scale-out architectures.

A point that gets lost in all of this is that simply putting a multicore processor in a server didn't mean scale came automatically. If the application itself was not changed, it would simply run on one of the cores, however many the CPU had. To take advantage of this scaling out of compute, the applications themselves had to go through a transformation.
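
To make the distinction concrete, here is a minimal Python sketch, with an invented, hypothetical work function, of the same CPU-bound job run serially (which occupies a single core no matter how many are available) and then fanned out across cores with a process pool, which is the change the application itself has to make.

    from multiprocessing import Pool

    def crunch(n):
        # Stand-in for a CPU-bound task on one slice of the workload.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        chunks = [2_000_000] * 8

        # Unchanged application: runs on a single core, however many are present.
        serial = [crunch(n) for n in chunks]

        # Rewritten application: the same work fanned out across available cores.
        with Pool() as pool:
            parallel = pool.map(crunch, chunks)

        assert serial == parallel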

The same story has played out on the storage side. The premise behind Big Data applications is that volumes of data can be sharded across a number of nodes so that operations can be scaled out across a larger number of servers, each one handling a fraction of the job and completing its part in less time. By spreading workloads out across multiple storage servers, the time it takes to fetch data and perform operations drops.
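
As a rough illustration of the sharding idea, the Python sketch below hashes keys across a handful of hypothetical storage nodes so that each node holds, and operates on, only a fraction of the data; the node names and key format are invented for the example.

    from collections import defaultdict
    from zlib import crc32

    NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical storage nodes

    def shard_for(key: str) -> str:
        # Hash the key and map it onto one of the nodes.
        return NODES[crc32(key.encode()) % len(NODES)]

    shards = defaultdict(list)
    for key in (f"user:{i}" for i in range(10_000)):
        shards[shard_for(key)].append(key)

    # Each node works on its own slice in parallel; results are merged afterwards.
    partial_counts = {node: len(keys) for node, keys in shards.items()}
    print(partial_counts, "total:", sum(partial_counts.values()))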

Here again, the applications themselves need to change to take advantage of the new architecture.

What is happening now?
The application space as a whole is essentially going through its own transformation. At its most basic, this means that companies are in the process of rewriting business applications to take advantage of available data and to embrace a new architecture more capable of scaling than before.

Note that an additional property of scale-out applications is that they tend to be more resilient to failures in the infrastructure. Applications written expressly for scale-out environments are designed not with the goal of eliminating failures but rather of making failures transparent to the rest of the application. By replicating data in multiple places (across servers and across racks, for instance), Big Data applications are less reliant on any individual server or switch.
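
A simplified Python sketch of that replication idea appears below: replicas of each block are placed on servers in different racks, so losing one server or one top-of-rack switch never removes every copy. The topology, block names, and replica count are illustrative assumptions, not any particular product's placement policy.

    import random

    # Hypothetical rack-to-server topology.
    topology = {
        "rack-1": ["srv-11", "srv-12", "srv-13"],
        "rack-2": ["srv-21", "srv-22", "srv-23"],
        "rack-3": ["srv-31", "srv-32", "srv-33"],
    }

    def place_replicas(block_id: str, copies: int = 3):
        # Pick distinct racks first, then a server inside each chosen rack.
        racks = random.sample(list(topology), k=min(copies, len(topology)))
        return [(rack, random.choice(topology[rack])) for rack in racks]

    for block in ("block-001", "block-002"):
        print(block, place_replicas(block))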

But the application evolution isn’t just confined to companies with large enterprise applications. An entire industry has emerged to support a transition from multi-tiered applications to the current generation of flat, scaled out applications pioneered by the likes of Facebook and Google. If initiatives like Mesos and Docker are any indication, the future of high-performance applications will exist only in distributed environments, with operating system and toolkit support.

Where is the network in all of this?
Overlooked in this transition is the network. For decades, the network has been built to be intentionally agnostic to what is running on top of it. While there have been intermittent periods when terms like network-aware and application-aware were bandied about, the majority of networking since its inception has been an exercise in creating connectivity by providing bandwidth uniformly between any endpoints on the network.

The entire premise of scaling out is providing workload capacity in small chunks and then distributing applications across those chunks. In the case of compute, this is done by creating many small cores (or VMs) and then moving the applications to the available compute. In the case of storage, storage and processing capacity are spread across a number of nodes, and application workloads are distributed to wherever capacity is free. In each of these cases, small blocks of capacity are created and the application workloads are moved to them.
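
The placement logic itself can be very simple. The Python sketch below, with invented node and job names, assigns each incoming workload to whichever node currently has the least load, which is the essence of moving work to where capacity exists.

    import heapq

    # (used_capacity, node_name) pairs kept in a min-heap keyed on current load.
    nodes = [(0, "compute-1"), (0, "compute-2"), (0, "compute-3")]
    heapq.heapify(nodes)

    jobs = [4, 2, 7, 1, 3, 5]  # arbitrary units of work

    placement = {}
    for i, cost in enumerate(jobs):
        used, name = heapq.heappop(nodes)        # least-loaded node right now
        placement[f"job-{i}"] = name
        heapq.heappush(nodes, (used + cost, name))

    print(placement)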

How should the model for networking evolve? If scalable solutions all have the property that application workloads are moved to where capacity is present, then networking needs to go through some fairly foundational changes.

Networking today is based on a set of pathing algorithms that date back more than 50 years. The whole of networking is built on Shortest Path First algorithms, which essentially reduce the paths in the network to the set with the fewest hops. There might be hundreds of ways to get from point A to point B, but the network will only use the subset with the shortest possible path. Technologies like Equal-Cost Multi-Path (ECMP) routing then load balance traffic across paths with the same number of hops.

If the objective is to identify where there is capacity and push application flows across the least congested links, there will need to be fundamental changes in how networking functions to account for non-equal-cost multipathing (that is, fanning traffic across all available links).
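
The contrast can be shown on a toy topology. In the Python sketch below (an invented four-node network), ECMP considers only the minimum-hop paths between A and D, while a non-equal-cost approach would treat every available simple path as a candidate for carrying traffic.

    graph = {
        "A": ["B", "C"],
        "B": ["A", "C", "D"],
        "C": ["A", "B", "D"],
        "D": ["B", "C"],
    }

    def simple_paths(src, dst, path=None):
        # Enumerate every loop-free path from src to dst.
        path = (path or []) + [src]
        if src == dst:
            yield path
            return
        for nxt in graph[src]:
            if nxt not in path:
                yield from simple_paths(nxt, dst, path)

    paths = list(simple_paths("A", "D"))
    min_hops = min(len(p) for p in paths)

    ecmp_paths = [p for p in paths if len(p) == min_hops]  # shortest paths only
    all_paths = paths                                      # every usable path

    print("ECMP candidates:", ecmp_paths)
    print("Non-equal-cost candidates:", all_paths)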

Next-generation application requirements
If the Third Platform era is characterized by a new generation of applications, then what those applications require will determine how the infrastructure supporting them must evolve.

Third Era applications have the following properties:

  • Horizontally scaled – Applications will tend to be based on scale-out architectures
  • Agile – With an eye toward facilitating service management, interactions (from provisioning to troubleshooting) will be highly automated, across infrastructure silos
  • Integrated – To achieve the required performance and scale, compute, storage, networking, and the application will all be integrated
  • Resilient – Distributed applications will not be designed for infrastructure uptime but rather for overall application resiliency (fault tolerant, not fault free)
  • Secure – With data underpinning many of these applications, security and compliance (along with auditability) will be key

These properties will determine how each of compute, storage, and networking must evolve.

Scale-out networking
The network that supports scale-out applications will itself be based on a scale-out architecture. The key property of scale-out is less about the ultimate scale and more about the path to that scale. If applications scale by adding instances (on servers, in VMs, or in containers), then the supporting infrastructure must follow suit, enabling additional capacity as needed.

There are two facets here that are important:

Graceful addition of new capacity - Because application capacity will be turned up as needed, the requisite infrastructure capacity must be easy to add. Servers should be added without significant re-architecture efforts, storage servers must be added without redesigning the cluster (see the sketch following these two facets), and network capacity must be added without going through a massive datacenter deployment exercise. Leaf-spine architectures grow through step-function-like scaling: when the number of access ports requires the addition of a new spine switch, for example, the entire architecture must be revisited and every device re-cabled. This incurs a significantly longer delay than either the compute or storage equivalents. A next-generation network designed with graceful growth in mind would allow for non-disruptive capacity additions.

Scale down capacity when it is not needed - While most scaling discussions focus on scaling up, it is equally important to be able to scale down. For instance, if a specific application requires less capacity at certain times or under certain conditions, the supporting infrastructure must be capable of redeploying that capacity to other applications or tenants as it makes sense. Traditional networking built on leaf-spine architectures distributes capacity uniformly regardless of application requirements. Next-generation network architectures should be able to dynamically reallocate capacity away from where it is not required. This means leveraging technologies like WDM, which allow capacity to be treated as a fluid resource that can be applied through programmatic controls from a central management point.
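
For the first facet, the storage-side version of graceful capacity addition can be sketched in Python with consistent hashing: when a node is added to the hash ring, only a fraction of keys move rather than the whole cluster being re-sharded. The node names, virtual-node count, and key format are assumptions made for illustration.

    import bisect
    from hashlib import md5

    def h(value: str) -> int:
        return int(md5(value.encode()).hexdigest(), 16)

    class Ring:
        def __init__(self, nodes, vnodes=64):
            # Each node gets several virtual points on the ring for balance.
            self.ring = sorted((h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
            self.keys = [k for k, _ in self.ring]

        def node_for(self, key: str) -> str:
            # Walk clockwise to the next ring point, wrapping at the end.
            i = bisect.bisect(self.keys, h(key)) % len(self.keys)
            return self.ring[i][1]

    keys = [f"obj-{i}" for i in range(10_000)]
    before = Ring(["n1", "n2", "n3"])
    after = Ring(["n1", "n2", "n3", "n4"])  # one node added

    moved = sum(before.node_for(k) != after.node_for(k) for k in keys)
    print(f"{moved / len(keys):.0%} of keys moved after adding one node")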

It is probably worth adding an element here about how compute and storage will scale out. If a new resource is added in a different physical location, the role of the network is not just to have the requisite capacity but also to make that resource look as if it is physically adjacent to the other resources. Scaling out is not just about capacity then; it is also about providing high-bandwidth, low-latency connectivity such that data locality is a less stringent requirement than otherwise. This means that resources can be across the datacenter, across a metro area, or across a continent.

Agility
Agility, put simply, is about making change faster. The notion that you can plan your architecture years in advance and then allow the system to simply run is just no longer true. When applications and the data they use are dynamic, change is a certainty. The question is: how do you deal with that change?

There are two ways to deal with change: automate what you can, and make everything else easier.

The currency of automation is data. For anything to be automated, data must be shared between systems programmatically. Automation is, in many ways, a byproduct of effective integration, but integration is not by itself the entire story. To automate, you must understand workflows, meaning how things are actually done. This is an exercise in understanding how information is layered based on frame of reference. When there is an issue with a web server, automation is less about taking some action and more about collecting all the data related to that web server across the entire infrastructure. The challenge in automation is knowing what action to take, not reducing the keystrokes it takes to execute the command.
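
A minimal Python sketch of that data-gathering step is shown below. The collector functions and fields are hypothetical stand-ins for real inventory, monitoring, and network APIs; the point is that automation starts by assembling one correlated view of a host across the compute, storage, and network silos.

    def compute_facts(host):
        # Hypothetical compute inventory/telemetry lookup.
        return {"vm": host, "hypervisor": "hv-07", "cpu_util": 0.82}

    def storage_facts(host):
        # Hypothetical storage telemetry lookup.
        return {"volumes": ["vol-123"], "iops": 450}

    def network_facts(host):
        # Hypothetical network state lookup.
        return {"switch_port": "leaf3:eth1/12", "errors": 0, "utilization": 0.31}

    def workflow_context(host):
        # One frame of reference: everything the infrastructure knows about the host.
        return {
            "host": host,
            "compute": compute_facts(host),
            "storage": storage_facts(host),
            "network": network_facts(host),
        }

    print(workflow_context("web-01"))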

It is impossible to automate everything, so whatever remains must be more flexible and easier to handle than we have become accustomed to in networking. This means simplifying architectures so that we deploy fewer devices and ports. It means reducing the number of control points in the network so there are fewer places to go to make changes. And it means implicitly handling behaviors that have traditionally been managed through pinpoint control over thousands of configuration knobs.

Integrated infrastructure
The future of IT infrastructure is based not on silos of compute, storage, and networking capacity but on the various subsystems working together to deliver application workloads. Accordingly, solutions that service the Third Platform era will either be tightly-integrated solutions from a single vendor, or collections of components that are explicitly designed to be integrated.

In the case of the former, the concern for customers is the pricing impacts of an integrated solution. Vertical stacks are inherently more difficult to displace, which means that incumbency creates a strong barrier to adoption, which will have the tendency to drive pricing higher (noting that there is already a lot of pressure to push pricing down).

In the case of the latter, the integration will need to be more than testing things alongside each other. This is not about the coexistence of equipment but rather the intelligent interaction of devices from different classes of vendor. From a Plexxi perspective, this is why efforts like the Data Services Engine (DSE) are important. They provide a means of integrating, but more importantly, they provide a framework that is easily extensible to other infrastructure so that future components can always be integrated. Additionally, this integration layer is open source, so the likelihood of lock-in is significantly lower.

Resilience
The next-generation platform is resilient. Rather than designing for correctness and relying on a never-ending battery of tests to ensure efficacy, infrastructure is constructed using building blocks that are themselves resilient to failures and tolerant to issues in either the hardware or the software. From a compute perspective, this means having the ability to fail to other application instances or containers. For storage, this means replicating data across multiple servers in multiple racks. For networking, this is all about having multiple paths between endpoints so that if one goes down, resources are not stranded.

With resiliency built in via Plexxi’s inherent optical path diversity, the emphasis shifts to failure detection and failover. Path pre-calculation is a major player in making sure that failover and convergence times stay low.
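
The general shape of path pre-calculation can be sketched in Python as follows: compute a primary path and a link-disjoint backup ahead of time on a small invented topology, so that when a link on the primary fails, failover is a table lookup rather than a fresh computation. This is a generic illustration, not Plexxi's actual algorithm.

    from collections import deque

    # Undirected links of a hypothetical topology, each stored as a sorted pair.
    links = {("A", "B"), ("B", "D"), ("A", "C"), ("C", "D"), ("B", "C")}

    def neighbors(node, usable):
        for a, b in usable:
            if a == node:
                yield b
            if b == node:
                yield a

    def shortest_path(src, dst, usable):
        # Breadth-first search over the usable links.
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in neighbors(path[-1], usable):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    def path_links(path):
        return {tuple(sorted(pair)) for pair in zip(path, path[1:])}

    primary = shortest_path("A", "D", links)
    backup = shortest_path("A", "D", links - path_links(primary))  # link-disjoint

    print("primary:", primary, "backup:", backup)
    # If any link on the primary fails, traffic shifts to the precomputed backup.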

Over time, resilience will include pushes into DevOps-style deployment models where the infrastructure is treated as a single system image that is qualified before changes are deployed. This will require integration with DevOps tools—not just tools like Chef and Ansible but also tools like Jenkins and Git.

Security
Security is about keeping data secure, not just keeping equipment secure. This means that traffic must be isolated where necessary, auditable when required, and ultimately managed as a collection of tenant and application flows that can be treated individually as their payloads require. To get differentiated service, there will need to be policy abstraction that dictates the workload requirements for individual tenants and applications. For instance, if a workload requires special treatment, it can be flagged and redirected as needed.
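
A policy abstraction of that kind might look like the Python sketch below, where each flow is classified by tenant and application and given the treatment its policy dictates (isolation, auditing, redirection). The tenants, applications, and actions are invented for the example.

    # Hypothetical per-tenant, per-application policies.
    POLICIES = {
        ("acme", "payments"):  {"isolate": True,  "audit": True,  "redirect": "inspection-vnf"},
        ("acme", "analytics"): {"isolate": False, "audit": True,  "redirect": None},
    }
    DEFAULT = {"isolate": False, "audit": False, "redirect": None}

    def treat(flow):
        policy = POLICIES.get((flow["tenant"], flow["app"]), DEFAULT)
        actions = []
        if policy["isolate"]:
            actions.append("place in isolated segment")
        if policy["audit"]:
            actions.append("mirror to audit store")
        if policy["redirect"]:
            actions.append(f"redirect via {policy['redirect']}")
        return actions or ["forward normally"]

    print(treat({"tenant": "acme", "app": "payments", "src": "10.0.0.5"}))
    print(treat({"tenant": "beta", "app": "web", "src": "10.1.0.9"}))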

More Stories By Michael Bushong

The best marketing efforts combine deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills over 12 years at Juniper Networks, where he led product management, product strategy, and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
