
Intercloud: The Evolution of Global Application Delivery

The concept of an “intercloud” is floating around the tubes and starting to gather some attention. According to Greg Ness, you can “Think of the intercloud as an elastic mesh of on demand processing power deployed across multiple data centers. The payoff is massive scale, efficiency and flexibility.”

Basically, the intercloud is the natural evolution of global application delivery. The intercloud is about delivering applications (services) from one of many locations based on a variety of parameters that will be, one assumes, user/organization defined. Some of those parameters could be traditional ones: application availability, performance, or user-location. Others could be more business-focused and based on such tangibles as cost of processing.

Greg, playing off Hoff, explains:

For example, services could be delivered from one location over another because of short term differentials in power and/or labor costs. It would also give enterprises more viable options for dealing with localized tax or regulatory changes.

The intercloud doesn’t yet exist, however; it has at least one missing piece: the automation of manual tasks at the core of the network. The intercloud requires automation of network services, the arcane collection of manual processes required today to keep networks and applications available.

Until there is network service automation, all intercloud bets are off.

What I find eminently exciting about the intercloud concept is that it requires a level of intelligence, of contextual awareness, that is the purview of application delivery. We’re calling them services again, as we did when SOA was all the rage, but in the end even a service can be considered an application – it’s a self-contained piece of code that executes a specific function for a specific business purpose. If it makes it easier to grab onto, just call “application delivery” “service delivery”, because there really isn’t too much of a difference there. But the intercloud requires a bit more awareness than global application delivery; specifically, it requires more business and data-center-specific awareness than we have available today.

On the surface, the intercloud sounds a lot like what we do today in a globally load balanced environment: application services are delivered from the data center that makes the most sense based on the variables (context) surrounding the request, including the user, the state of the data center, the networks involved, and the applications themselves. Global application delivery decisions are often made based on availability or location, but when the global application delivery infrastructure can collaborate with the local application delivery infrastructure, the decision-making process gets a lot more granular. Application performance, network conditions, capacity – all can be considered as part of the decision regarding which data center should service any given request.
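To make that concrete, here is a minimal sketch of the kind of scoring a global application delivery controller performs. The class, field names, and weights are hypothetical (not any particular product’s API); the point is only that availability, performance, capacity, and user location can all feed a single routing decision:

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    available: bool         # is the site healthy enough to take traffic?
    avg_response_ms: float  # application performance seen by local monitoring
    capacity_used: float    # 0.0 - 1.0 current utilization
    region: str             # rough geographic location of the site

def score(dc: DataCenter, user_region: str) -> float:
    """Higher score = better candidate for this request (illustrative weights)."""
    if not dc.available:
        return float("-inf")                       # never route to an unavailable site
    s = 50.0 if dc.region == user_region else 0.0  # prefer sites near the user
    s += max(0.0, 100.0 - dc.avg_response_ms)      # reward fast application response
    s -= dc.capacity_used * 40.0                   # penalize heavily loaded sites
    return s

def pick_data_center(sites: list[DataCenter], user_region: str) -> DataCenter:
    return max(sites, key=lambda dc: score(dc, user_region))

if __name__ == "__main__":
    sites = [
        DataCenter("us-east", True, 35.0, 0.90, "us"),
        DataCenter("us-west", True, 55.0, 0.30, "us"),
        DataCenter("eu-central", True, 20.0, 0.50, "eu"),
    ]
    print(pick_data_center(sites, "us").name)  # "us-west" with this toy data
```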

I rarely disagree with Greg and, on the surface at least, he is absolutely right that we need to automate processes before the intercloud can come to fruition. But we are also missing one other piece: the variables that are peculiar to the business and the data centers comprising the intercloud, and the integration/automation that will allow global application delivery infrastructure to take advantage of those variables in an efficient manner. That data is likely assumed in the call to automate, because without it there is not nearly enough information to automate decisions across data centers in the way Greg and Hoff expect such systems to work.


WHAT’S DIFFERENT ABOUT INTERCLOUD?
What makes the intercloud differ from today’s global application delivery architectures is the ability to base the data-center decision on business-type (non-IT) data. This data is necessary to construct the appropriate rules against which request-routing decisions can be evaluated. While global application delivery systems today are capable of understanding a great many variables, there are a few nascent data points they don’t yet have, such as the cost to serve up an application (service), labor costs, or a combination of time of day and any other business variable.

Don’t get me wrong – an intelligent global application delivery system can be configured with such information today, but it’s a manual process, and manual processes don’t scale well. This is why Greg insists (correctly) that automation is the key to the intercloud. If the cost of power, for example, changes throughout the day and is in fact volatile in general, then the global application delivery system would need to be manually reconfigured every time it changed. That simply wouldn’t be feasible. A system for providing that information – and any other information that would become the basis for request routing across distributed data centers – needs to be constructed and subsequently integrated into the massive management system that will drive the intercloud.
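As a hedged sketch of what that automation might look like, the snippet below polls a hypothetical feed of per-site power costs and folds it into the routing score, so no one has to reconfigure the controller by hand each time the price moves. The feed URL, JSON field names, and cost weighting are all assumptions for illustration:

```python
import json
import urllib.request

# Hypothetical endpoint publishing the current power cost (per kWh) for each site.
COST_FEED_URL = "https://example.com/ops/power-costs.json"

def fetch_power_costs(url: str = COST_FEED_URL) -> dict[str, float]:
    """Pull the latest per-site power costs; fall back to 'no data' on any failure."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)   # e.g. {"us-east": 0.14, "us-west": 0.09}
    except (OSError, ValueError):
        return {}                    # missing feed: treat all sites as equal cost

def business_adjusted_score(base_score: float, site: str,
                            costs: dict[str, float],
                            cost_weight: float = 200.0) -> float:
    """Subtract a penalty proportional to the site's current power cost."""
    return base_score - cost_weight * costs.get(site, 0.0)

if __name__ == "__main__":
    costs = fetch_power_costs()
    # Base scores would come from availability/performance logic like the earlier sketch.
    base = {"us-east": 79.0, "us-west": 83.0}
    best = max(base, key=lambda s: business_adjusted_score(base[s], s, costs))
    print(f"route new requests to {best}")
```

Run on a schedule or triggered by the feed itself, this is the kind of integration that turns volatile business data into routing decisions without anyone touching the configuration.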

It makes a certain amount of sense, if you think about it, that global application delivery would also need to evolve into something more: something capable of context awareness at a higher point of view than local application delivery. Global application delivery will be the foundation for the intercloud because it is already performing the basic function – we just lack the variables and the automation necessary for global application delivery solutions to take the next step and become intercloud controllers.

But they will get there.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
