

Achieving Command and Control through Requests and Responses

I’ve been talking about layer 7 load balancing (L7 LB) for, well, a long time. Since its first inception back in the day, when someone decided that routing requests using URIs and host headers was a pretty innovative thing to do. If you must know, that was back in 2001.

And it was innovative then. Because at the time, load balancing and routing were addressed at layers 3 and 4 – that’s TCP/IP – in the network, using routers and switches and load balancers and network architecture. You didn’t commonly see network devices operating at L7. You just didn’t, except in the app infrastructure.

Today you see it all the time in the network. That virtual server definition in httpd.conf that relies on the HTTP Host header? That’s part of L7 LB. Rewriting URLs? Part of L7 LB. Persistent (sticky) sessions? You got this, right? Right. L7 LB.
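Each of those features is just a decision made against a parsed HTTP request. Here’s a minimal sketch of all three – in Python rather than any vendor’s configuration syntax, with pool names, paths, and the cookie name all illustrative:

```python
def pick_pool(request):
    """Route on the Host header -- the classic virtual-server case."""
    host = request["headers"].get("Host", "")
    if host == "api.example.com":
        return "api_pool"
    return "web_pool"

def rewrite_url(request):
    """Rewrite a legacy path before forwarding upstream."""
    if request["path"].startswith("/old/"):
        request["path"] = "/v2/" + request["path"][len("/old/"):]
    return request

def sticky_server(request, pool):
    """Persist a client to the same server via a session cookie."""
    sid = request["cookies"].get("SRV")
    if sid in pool:
        return sid
    return pool[0]  # no (valid) cookie: a real LB would apply an algorithm here

request = {"headers": {"Host": "api.example.com"},
           "path": "/old/login",
           "cookies": {}}
print(pick_pool(request))            # api_pool
print(rewrite_url(request)["path"])  # /v2/login
```

The point is that every one of these decisions requires parsing the request – which is exactly what distinguishes L7 LB from L3–4 routing on packets and connections.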

So basically I’ve spent most of this century preaching about L7 LB.

One Monday morning in May I was reading the Internet (’cause I do that on Mondays) and came across a lengthy discussion of microservices and L7 LB.

Guys, I must tell you I was totally excited by this blog. I was excited by the content, by the focus on the role of L7 LB in microservices and emerging app architectures (he mentions canary deployments), and by the words the author used to seamlessly move what has been a traditionally network-focused technology into an ops-focused technology. This is, without a doubt, one of the best (and most concise) descriptions of L7 LB I’ve read on the Internet:

It’s this experience that motivated linkerd (pronounced “linker dee”), a proxy designed to give service operators command & control over traffic between services. This encompasses a variety of features including transport security, load balancing, multiplexing, timeouts, retries, and routing.

In this post, I’ll discuss linkerd’s approach to routing. Classically, routing is one of the problems that is addressed at Layers 3 and 4—TCP/IP—with hardware load balancers, BGP, DNS, iptables, etc. While these tools still have a place in the world, they’re difficult to extend to modern multi-service software systems. Instead of operating on connections and packets, we want to operate on requests and responses. Instead of IP addresses and ports, we want to operate on services and instances.

I highlighted that one part because man, there’s just so much wrapped up in that single statement I can’t even. Literally.

The concept of operating on requests and responses is the foundation of entire solution sets across security, scale, and performance. A proxy capable of inspecting requests and responses is able to deal not only with transport security (TLS/SSL offload) and load balancing, but with app security as well. Request and response inspection is a critical component of app security; scanning and scrubbing content deep down in the payload (the JSON, the HTML, the XML) to find exploits and malicious content is the premise of a web application firewall.
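To make that premise concrete, here’s a hedged illustration – not a real WAF rule set – of what payload scrubbing looks like: a proxy that can parse the body walks a JSON payload and flags anything matching suspicious patterns. The patterns and field names are purely illustrative:

```python
import json
import re

# Illustrative injection patterns only; a real WAF ships curated rule sets.
SUSPICIOUS = [re.compile(p, re.IGNORECASE)
              for p in (r"<script\b", r"\bunion\s+select\b", r"\.\./")]

def scrub(payload: str) -> bool:
    """Return True if the request body looks safe to forward upstream."""
    try:
        body = json.loads(payload)
    except ValueError:
        return False  # reject anything that isn't well-formed JSON

    def walk(value):
        if isinstance(value, dict):
            return all(walk(v) for v in value.values())
        if isinstance(value, list):
            return all(walk(v) for v in value)
        if isinstance(value, str):
            return not any(p.search(value) for p in SUSPICIOUS)
        return True

    return walk(body)

print(scrub('{"comment": "nice post"}'))                  # True
print(scrub('{"comment": "<script>alert(1)</script>"}'))  # False
```

None of this is possible at L3–4: a device that only sees packets and connections never reconstructs the JSON to inspect it.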

And then there’s access control, which increasingly cannot simply rely on IP addresses and user names. The proliferation of cloud and roaming, mobile employees and users alike means a greater focus on controlling access to applications based on context. Which means operating on requests and being able to extract a variety of information from them that will provide richer access policies able to cross the chasm from users to things (devices).
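A minimal sketch of what a context-based policy might look like, assuming purely illustrative attribute names (device_managed, network, mfa, action) that a proxy could extract from the request and session:

```python
def allow(ctx: dict) -> bool:
    """Decide access from request context, not just an IP address."""
    # Managed device on the corporate network: full access.
    if ctx.get("device_managed") and ctx.get("network") == "corp":
        return True
    # Anyone else must have passed MFA, and gets read-only access.
    return bool(ctx.get("mfa")) and ctx.get("action") == "read"

print(allow({"device_managed": True, "network": "corp", "action": "write"}))  # True
print(allow({"device_managed": False, "mfa": True, "action": "read"}))        # True
print(allow({"device_managed": False, "mfa": False, "action": "read"}))       # False
```

The richness comes from the inputs: every attribute in the context dict is something only an L7-aware proxy can pull out of the request.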

And of course there’s scale. Scale today is not about load balancing algorithms; it’s about architecture – application and operational architecture alike. DevOps-driven deployment patterns like canary and blue-green deployments, along with sharding and partitioning architectures, are critical to achieving the seamless scale required today. L7 LB is key to these endeavors, enabling fine-grained control over the routing of requests and handling of responses between apps (micro or monolith) and users (thing and human).
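A canary decision at L7 can be as simple as a weighted coin flip plus a header-based opt-in. This sketch assumes hypothetical pool names and an X-Canary header; the injectable rng parameter just makes the logic testable:

```python
import random

def choose_version(request, canary_fraction=0.05, rng=random.random):
    """Send a small fraction of requests (or explicit opt-ins) to the canary."""
    if request["headers"].get("X-Canary") == "always":
        return "v2_pool"
    return "v2_pool" if rng() < canary_fraction else "v1_pool"

print(choose_version({"headers": {"X-Canary": "always"}}))  # v2_pool
```

Blue-green is the degenerate case: flip canary_fraction from 0.0 to 1.0 once the new version checks out. Either way, the decision is per-request, which is why it lives at L7.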

And that’s really what the aforementioned post (did I mention it was awesome already?) is talking about: L7 LB. Whether it’s hardware or software, in the cloud or on-premises, isn’t really all that important. That’s an operational detail that is (or should be) irrelevant when we’re talking about architecting a scalable application composed of “services and instances.”

I cannot reiterate often enough the importance of L7 LB as part of modern application architectures. And it’s exciting to see the dev and ops side of the world starting to shout the same thing as they encounter the operational challenges of scale and routing amidst a highly interconnected and interdependent set of services that are the foundation for apps (and business) today.

Read the original blog entry...

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
